00:00:00.001 Started by upstream project "autotest-per-patch" build number 132511 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.116 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.117 The recommended git tool is: git 00:00:00.117 using credential 00000000-0000-0000-0000-000000000002 00:00:00.119 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.222 Fetching changes from the remote Git repository 00:00:00.229 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.277 Using shallow fetch with depth 1 00:00:00.277 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.277 > git --version # timeout=10 00:00:00.312 > git --version # 'git version 2.39.2' 00:00:00.312 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.334 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.334 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.978 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.993 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.006 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.006 > git config core.sparsecheckout # timeout=10 00:00:06.018 > git read-tree -mu HEAD # timeout=10 00:00:06.033 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.055 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.055 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.147 [Pipeline] Start of Pipeline 00:00:06.158 [Pipeline] library 00:00:06.159 Loading library shm_lib@master 00:00:06.159 Library shm_lib@master is cached. Copying from home. 00:00:06.174 [Pipeline] node 00:00:06.184 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.185 [Pipeline] { 00:00:06.194 [Pipeline] catchError 00:00:06.195 [Pipeline] { 00:00:06.205 [Pipeline] wrap 00:00:06.213 [Pipeline] { 00:00:06.220 [Pipeline] stage 00:00:06.221 [Pipeline] { (Prologue) 00:00:06.409 [Pipeline] sh 00:00:06.698 + logger -p user.info -t JENKINS-CI 00:00:06.717 [Pipeline] echo 00:00:06.718 Node: CYP9 00:00:06.726 [Pipeline] sh 00:00:07.033 [Pipeline] setCustomBuildProperty 00:00:07.040 [Pipeline] echo 00:00:07.041 Cleanup processes 00:00:07.045 [Pipeline] sh 00:00:07.331 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.331 3103908 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.346 [Pipeline] sh 00:00:07.635 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.635 ++ grep -v 'sudo pgrep' 00:00:07.635 ++ awk '{print $1}' 00:00:07.635 + sudo kill -9 00:00:07.635 + true 00:00:07.650 [Pipeline] cleanWs 00:00:07.661 [WS-CLEANUP] Deleting project workspace... 00:00:07.661 [WS-CLEANUP] Deferred wipeout is used... 
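The sudo pgrep / kill -9 sequence a few lines above is the prologue's guard against SPDK processes left running by a previous build on this node. A minimal sketch of the same idiom, assuming WORKSPACE is a hypothetical stand-in for the Jenkins job directory:

    # Collect the pids of anything still executing out of the workspace,
    # excluding the pgrep command itself, then force-kill them. $pids is left
    # unquoted on purpose so multiple pids word-split into separate arguments.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    sudo kill -9 $pids || true

In the run above the filtered list came back empty (pgrep matched only itself), so kill -9 ran with no arguments and failed; the trailing true is what keeps the step from failing and lets the cleanup finish below.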
00:00:07.669 [WS-CLEANUP] done 00:00:07.674 [Pipeline] setCustomBuildProperty 00:00:07.688 [Pipeline] sh 00:00:07.976 + sudo git config --global --replace-all safe.directory '*' 00:00:08.072 [Pipeline] httpRequest 00:00:08.452 [Pipeline] echo 00:00:08.453 Sorcerer 10.211.164.20 is alive 00:00:08.462 [Pipeline] retry 00:00:08.464 [Pipeline] { 00:00:08.477 [Pipeline] httpRequest 00:00:08.482 HttpMethod: GET 00:00:08.482 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.483 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.506 Response Code: HTTP/1.1 200 OK 00:00:08.507 Success: Status code 200 is in the accepted range: 200,404 00:00:08.507 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:47.904 [Pipeline] } 00:00:47.922 [Pipeline] // retry 00:00:47.929 [Pipeline] sh 00:00:48.215 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:48.234 [Pipeline] httpRequest 00:00:48.601 [Pipeline] echo 00:00:48.603 Sorcerer 10.211.164.20 is alive 00:00:48.614 [Pipeline] retry 00:00:48.616 [Pipeline] { 00:00:48.633 [Pipeline] httpRequest 00:00:48.638 HttpMethod: GET 00:00:48.639 URL: http://10.211.164.20/packages/spdk_9d382c2520ce2a2b1022642bdc007de02d4ab224.tar.gz 00:00:48.639 Sending request to url: http://10.211.164.20/packages/spdk_9d382c2520ce2a2b1022642bdc007de02d4ab224.tar.gz 00:00:48.646 Response Code: HTTP/1.1 200 OK 00:00:48.647 Success: Status code 200 is in the accepted range: 200,404 00:00:48.647 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_9d382c2520ce2a2b1022642bdc007de02d4ab224.tar.gz 00:06:15.512 [Pipeline] } 00:06:15.529 [Pipeline] // retry 00:06:15.535 [Pipeline] sh 00:06:15.822 + tar --no-same-owner -xf spdk_9d382c2520ce2a2b1022642bdc007de02d4ab224.tar.gz 00:06:19.144 [Pipeline] sh 00:06:19.432 + git -C spdk log --oneline -n5 00:06:19.432 9d382c252 bdev/nvme: use poll_group's fd_group to register interrupts 00:06:19.432 472bfc460 nvme: add poll_group interrupt callback 00:06:19.432 9211e340a nvme: add spdk_nvme_poll_group_get_fd_group() 00:06:19.432 72504c426 thread: fd_group-based interrupts 00:06:19.432 b95709785 thread: move interrupt allocation to a function 00:06:19.444 [Pipeline] } 00:06:19.459 [Pipeline] // stage 00:06:19.468 [Pipeline] stage 00:06:19.470 [Pipeline] { (Prepare) 00:06:19.486 [Pipeline] writeFile 00:06:19.501 [Pipeline] sh 00:06:19.791 + logger -p user.info -t JENKINS-CI 00:06:19.803 [Pipeline] sh 00:06:20.092 + logger -p user.info -t JENKINS-CI 00:06:20.105 [Pipeline] sh 00:06:20.392 + cat autorun-spdk.conf 00:06:20.392 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:20.392 SPDK_TEST_NVMF=1 00:06:20.392 SPDK_TEST_NVME_CLI=1 00:06:20.392 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:20.392 SPDK_TEST_NVMF_NICS=e810 00:06:20.392 SPDK_TEST_VFIOUSER=1 00:06:20.392 SPDK_RUN_UBSAN=1 00:06:20.392 NET_TYPE=phy 00:06:20.401 RUN_NIGHTLY=0 00:06:20.405 [Pipeline] readFile 00:06:20.427 [Pipeline] withEnv 00:06:20.429 [Pipeline] { 00:06:20.441 [Pipeline] sh 00:06:20.730 + set -ex 00:06:20.730 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:06:20.730 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:20.730 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:20.730 ++ SPDK_TEST_NVMF=1 00:06:20.730 ++ SPDK_TEST_NVME_CLI=1 00:06:20.730 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:20.730 ++ SPDK_TEST_NVMF_NICS=e810 00:06:20.730 ++ 
SPDK_TEST_VFIOUSER=1 00:06:20.730 ++ SPDK_RUN_UBSAN=1 00:06:20.730 ++ NET_TYPE=phy 00:06:20.730 ++ RUN_NIGHTLY=0 00:06:20.730 + case $SPDK_TEST_NVMF_NICS in 00:06:20.730 + DRIVERS=ice 00:06:20.730 + [[ tcp == \r\d\m\a ]] 00:06:20.730 + [[ -n ice ]] 00:06:20.730 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:06:20.730 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:06:20.730 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:06:20.730 rmmod: ERROR: Module irdma is not currently loaded 00:06:20.730 rmmod: ERROR: Module i40iw is not currently loaded 00:06:20.730 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:06:20.730 + true 00:06:20.730 + for D in $DRIVERS 00:06:20.730 + sudo modprobe ice 00:06:20.730 + exit 0 00:06:20.740 [Pipeline] } 00:06:20.756 [Pipeline] // withEnv 00:06:20.760 [Pipeline] } 00:06:20.772 [Pipeline] // stage 00:06:20.782 [Pipeline] catchError 00:06:20.784 [Pipeline] { 00:06:20.797 [Pipeline] timeout 00:06:20.798 Timeout set to expire in 1 hr 0 min 00:06:20.799 [Pipeline] { 00:06:20.812 [Pipeline] stage 00:06:20.814 [Pipeline] { (Tests) 00:06:20.826 [Pipeline] sh 00:06:21.117 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:21.117 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:21.117 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:21.117 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:06:21.117 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.117 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:21.117 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:06:21.117 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:21.117 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:21.117 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:21.117 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:06:21.117 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:21.117 + source /etc/os-release 00:06:21.117 ++ NAME='Fedora Linux' 00:06:21.117 ++ VERSION='39 (Cloud Edition)' 00:06:21.117 ++ ID=fedora 00:06:21.117 ++ VERSION_ID=39 00:06:21.117 ++ VERSION_CODENAME= 00:06:21.117 ++ PLATFORM_ID=platform:f39 00:06:21.117 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:21.117 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:21.117 ++ LOGO=fedora-logo-icon 00:06:21.117 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:21.117 ++ HOME_URL=https://fedoraproject.org/ 00:06:21.117 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:21.117 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:21.117 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:21.117 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:21.117 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:21.117 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:21.117 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:21.117 ++ SUPPORT_END=2024-11-12 00:06:21.117 ++ VARIANT='Cloud Edition' 00:06:21.117 ++ VARIANT_ID=cloud 00:06:21.117 + uname -a 00:06:21.117 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:21.117 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:24.431 Hugepages 00:06:24.431 node hugesize free / total 00:06:24.431 node0 1048576kB 0 / 0 00:06:24.431 node0 2048kB 0 / 0 00:06:24.431 node1 1048576kB 0 / 0 00:06:24.431 node1 2048kB 0 / 0 00:06:24.431 00:06:24.431 Type BDF Vendor Device NUMA 
Driver Device Block devices 00:06:24.431 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:24.431 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:06:24.431 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:24.431 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:24.431 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:24.431 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:24.431 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:24.431 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:24.431 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:24.431 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:24.431 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:24.431 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:24.431 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:24.431 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:24.431 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:24.431 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:24.431 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:24.431 + rm -f /tmp/spdk-ld-path 00:06:24.431 + source autorun-spdk.conf 00:06:24.431 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:24.431 ++ SPDK_TEST_NVMF=1 00:06:24.431 ++ SPDK_TEST_NVME_CLI=1 00:06:24.431 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:24.431 ++ SPDK_TEST_NVMF_NICS=e810 00:06:24.431 ++ SPDK_TEST_VFIOUSER=1 00:06:24.431 ++ SPDK_RUN_UBSAN=1 00:06:24.431 ++ NET_TYPE=phy 00:06:24.431 ++ RUN_NIGHTLY=0 00:06:24.431 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:24.431 + [[ -n '' ]] 00:06:24.431 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:24.431 + for M in /var/spdk/build-*-manifest.txt 00:06:24.431 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:24.431 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:24.431 + for M in /var/spdk/build-*-manifest.txt 00:06:24.431 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:24.431 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:24.431 + for M in /var/spdk/build-*-manifest.txt 00:06:24.431 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:24.431 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:24.431 ++ uname 00:06:24.431 + [[ Linux == \L\i\n\u\x ]] 00:06:24.431 + sudo dmesg -T 00:06:24.431 + sudo dmesg --clear 00:06:24.431 + dmesg_pid=3106073 00:06:24.431 + [[ Fedora Linux == FreeBSD ]] 00:06:24.432 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:24.432 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:24.432 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:24.432 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:24.432 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:06:24.432 + [[ -x /usr/src/fio-static/fio ]] 00:06:24.432 + sudo dmesg -Tw 00:06:24.432 + export FIO_BIN=/usr/src/fio-static/fio 00:06:24.432 + FIO_BIN=/usr/src/fio-static/fio 00:06:24.432 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:24.432 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:06:24.432 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:24.432 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:24.432 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:24.432 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:24.432 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:24.432 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:24.432 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:24.692 14:04:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:24.692 14:04:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:06:24.692 14:04:29 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:24.692 14:04:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:24.692 14:04:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:24.692 14:04:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:06:24.692 14:04:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.692 14:04:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:24.692 14:04:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:24.692 14:04:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.692 14:04:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.692 14:04:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.692 14:04:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.692 14:04:29 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.692 14:04:29 -- paths/export.sh@5 -- $ export PATH 00:06:24.692 14:04:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.692 14:04:29 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:24.692 14:04:29 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:24.692 14:04:29 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732539869.XXXXXX 00:06:24.693 14:04:29 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732539869.UXPM8k 00:06:24.693 14:04:29 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:24.693 14:04:29 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:24.693 14:04:29 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:06:24.693 14:04:29 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:24.693 14:04:29 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:24.693 14:04:29 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:24.693 14:04:29 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:24.693 14:04:29 -- common/autotest_common.sh@10 -- $ set +x 00:06:24.693 14:04:29 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:24.693 14:04:29 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:24.693 14:04:29 -- pm/common@17 -- $ local monitor 00:06:24.693 14:04:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.693 14:04:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.693 14:04:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.693 14:04:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:24.693 14:04:29 -- pm/common@21 -- $ date +%s 00:06:24.693 14:04:29 -- pm/common@25 -- $ sleep 1 00:06:24.693 14:04:29 -- pm/common@21 -- $ date +%s 00:06:24.693 14:04:29 -- pm/common@21 -- $ date +%s 00:06:24.693 14:04:29 -- pm/common@21 -- $ date +%s 00:06:24.693 14:04:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732539869 00:06:24.693 14:04:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732539869 00:06:24.693 14:04:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732539869 00:06:24.693 14:04:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732539869 00:06:24.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732539869_collect-cpu-load.pm.log 00:06:24.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732539869_collect-vmstat.pm.log 00:06:24.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732539869_collect-cpu-temp.pm.log 00:06:24.693 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732539869_collect-bmc-pm.bmc.pm.log 00:06:25.635 14:04:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:25.635 14:04:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:25.635 14:04:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:25.635 14:04:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:25.635 14:04:30 -- spdk/autobuild.sh@16 -- $ date -u 00:06:25.635 Mon Nov 25 01:04:30 PM UTC 2024 00:06:25.635 14:04:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:25.635 v25.01-pre-227-g9d382c252 00:06:25.635 14:04:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:25.635 14:04:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:25.635 14:04:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:25.635 14:04:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:25.635 14:04:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:25.635 14:04:30 -- common/autotest_common.sh@10 -- $ set +x 00:06:25.635 ************************************ 00:06:25.635 START TEST ubsan 00:06:25.635 ************************************ 00:06:25.635 14:04:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:25.635 using ubsan 00:06:25.635 00:06:25.635 real 0m0.001s 00:06:25.635 user 0m0.000s 00:06:25.635 sys 0m0.001s 00:06:25.635 14:04:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:25.635 14:04:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:25.635 ************************************ 00:06:25.635 END TEST ubsan 00:06:25.635 ************************************ 00:06:25.897 14:04:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:25.897 14:04:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:25.897 14:04:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:25.897 14:04:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:25.897 14:04:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:25.897 14:04:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:25.897 14:04:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:25.897 14:04:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:25.897 
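The ubsan block just above shows SPDK's run_test helper in action: it prints START/END banners around the command, runs it under time, and reports the result. A hedged reconstruction of the observable behavior only (the real helper lives in common/autotest_common.sh, visible in the trace, and does considerably more bookkeeping):

    # Reconstruction of the banner-and-timing pattern seen in this log; not the
    # actual SPDK implementation.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

    run_test ubsan echo 'using ubsan'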
14:04:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:06:25.897 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:25.897 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:26.470 Using 'verbs' RDMA provider 00:06:41.957 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:56.880 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:56.880 Creating mk/config.mk...done. 00:06:56.880 Creating mk/cc.flags.mk...done. 00:06:56.880 Type 'make' to build. 00:06:56.880 14:05:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:06:56.880 14:05:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:56.880 14:05:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:56.880 14:05:00 -- common/autotest_common.sh@10 -- $ set +x 00:06:56.880 ************************************ 00:06:56.880 START TEST make 00:06:56.880 ************************************ 00:06:56.880 14:05:00 make -- common/autotest_common.sh@1129 -- $ make -j144 00:06:56.880 make[1]: Nothing to be done for 'all'. 00:06:56.880 The Meson build system 00:06:56.880 Version: 1.5.0 00:06:56.880 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:06:56.880 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:56.880 Build type: native build 00:06:56.880 Project name: libvfio-user 00:06:56.880 Project version: 0.0.1 00:06:56.880 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:56.880 C linker for the host machine: cc ld.bfd 2.40-14 00:06:56.880 Host machine cpu family: x86_64 00:06:56.880 Host machine cpu: x86_64 00:06:56.880 Run-time dependency threads found: YES 00:06:56.880 Library dl found: YES 00:06:56.880 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:56.880 Run-time dependency json-c found: YES 0.17 00:06:56.880 Run-time dependency cmocka found: YES 1.1.7 00:06:56.880 Program pytest-3 found: NO 00:06:56.880 Program flake8 found: NO 00:06:56.880 Program misspell-fixer found: NO 00:06:56.880 Program restructuredtext-lint found: NO 00:06:56.880 Program valgrind found: YES (/usr/bin/valgrind) 00:06:56.880 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:56.880 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:56.880 Compiler for C supports arguments -Wwrite-strings: YES 00:06:56.880 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:06:56.880 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:06:56.880 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:06:56.880 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
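Two of the autorun-spdk.conf values written during the Prepare stage resurface as flags on the ./configure invocation at the top of this block. The correspondence below is inferred from this log alone, not taken from SPDK documentation, and the command line is abridged:

    # Inferred mapping (this run only):
    #   SPDK_RUN_UBSAN=1     -> --enable-ubsan
    #   SPDK_TEST_VFIOUSER=1 -> --with-vfio-user
    ./configure --enable-debug --enable-werror --enable-ubsan --with-vfio-user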
00:06:56.880 Build targets in project: 8 00:06:56.880 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:56.880 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:56.880 00:06:56.880 libvfio-user 0.0.1 00:06:56.880 00:06:56.880 User defined options 00:06:56.880 buildtype : debug 00:06:56.880 default_library: shared 00:06:56.880 libdir : /usr/local/lib 00:06:56.880 00:06:56.880 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:57.451 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:57.451 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:57.451 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:57.451 [3/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:57.451 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:57.451 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:57.451 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:57.451 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:57.451 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:57.451 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:57.451 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:57.451 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:57.451 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:57.451 [13/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:57.451 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:57.451 [15/37] Compiling C object samples/null.p/null.c.o 00:06:57.451 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:57.451 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:57.451 [18/37] Compiling C object samples/server.p/server.c.o 00:06:57.451 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:57.451 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:57.451 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:57.451 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:57.451 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:57.451 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:57.712 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:57.712 [26/37] Compiling C object samples/client.p/client.c.o 00:06:57.712 [27/37] Linking target samples/client 00:06:57.712 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:57.712 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:57.712 [30/37] Linking target test/unit_tests 00:06:57.712 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:06:57.972 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:57.972 [33/37] Linking target samples/server 00:06:57.972 [34/37] Linking target samples/null 00:06:57.972 [35/37] Linking target samples/gpio-pci-idio-16 00:06:57.972 [36/37] Linking target samples/lspci 00:06:57.972 [37/37] Linking target samples/shadow_ioeventfd_server 00:06:57.972 INFO: autodetecting backend as ninja 00:06:57.972 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
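Condensed, the libvfio-user build recorded above is a stock Meson/Ninja flow, and the install step that follows on the next line stages into DESTDIR instead of writing to the live /usr/local/lib. A sketch with the long Jenkins paths replaced by a hypothetical $STAGING variable:

    # Option values mirror the "User defined options" summary above
    # (buildtype: debug, default_library: shared); $STAGING stands in for the
    # spdk/build/libvfio-user directory used in this job.
    meson setup build-debug --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug
    DESTDIR="$STAGING" meson install --quiet -C build-debug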
00:06:57.972 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:58.233 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:58.233 ninja: no work to do. 00:07:04.839 The Meson build system 00:07:04.839 Version: 1.5.0 00:07:04.839 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:07:04.839 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:07:04.839 Build type: native build 00:07:04.839 Program cat found: YES (/usr/bin/cat) 00:07:04.839 Project name: DPDK 00:07:04.839 Project version: 24.03.0 00:07:04.839 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:04.839 C linker for the host machine: cc ld.bfd 2.40-14 00:07:04.839 Host machine cpu family: x86_64 00:07:04.839 Host machine cpu: x86_64 00:07:04.839 Message: ## Building in Developer Mode ## 00:07:04.839 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:04.839 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:07:04.839 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:04.839 Program python3 found: YES (/usr/bin/python3) 00:07:04.839 Program cat found: YES (/usr/bin/cat) 00:07:04.839 Compiler for C supports arguments -march=native: YES 00:07:04.839 Checking for size of "void *" : 8 00:07:04.839 Checking for size of "void *" : 8 (cached) 00:07:04.839 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:04.839 Library m found: YES 00:07:04.839 Library numa found: YES 00:07:04.839 Has header "numaif.h" : YES 00:07:04.839 Library fdt found: NO 00:07:04.839 Library execinfo found: NO 00:07:04.839 Has header "execinfo.h" : YES 00:07:04.839 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:04.839 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:04.839 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:04.839 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:04.839 Run-time dependency openssl found: YES 3.1.1 00:07:04.839 Run-time dependency libpcap found: YES 1.10.4 00:07:04.839 Has header "pcap.h" with dependency libpcap: YES 00:07:04.839 Compiler for C supports arguments -Wcast-qual: YES 00:07:04.839 Compiler for C supports arguments -Wdeprecated: YES 00:07:04.839 Compiler for C supports arguments -Wformat: YES 00:07:04.839 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:04.839 Compiler for C supports arguments -Wformat-security: NO 00:07:04.839 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:04.839 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:04.839 Compiler for C supports arguments -Wnested-externs: YES 00:07:04.839 Compiler for C supports arguments -Wold-style-definition: YES 00:07:04.839 Compiler for C supports arguments -Wpointer-arith: YES 00:07:04.839 Compiler for C supports arguments -Wsign-compare: YES 00:07:04.839 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:04.839 Compiler for C supports arguments -Wundef: YES 00:07:04.839 Compiler for C supports arguments -Wwrite-strings: YES 00:07:04.839 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:04.839 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:07:04.839 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:04.840 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:04.840 Program objdump found: YES (/usr/bin/objdump) 00:07:04.840 Compiler for C supports arguments -mavx512f: YES 00:07:04.840 Checking if "AVX512 checking" compiles: YES 00:07:04.840 Fetching value of define "__SSE4_2__" : 1 00:07:04.840 Fetching value of define "__AES__" : 1 00:07:04.840 Fetching value of define "__AVX__" : 1 00:07:04.840 Fetching value of define "__AVX2__" : 1 00:07:04.840 Fetching value of define "__AVX512BW__" : 1 00:07:04.840 Fetching value of define "__AVX512CD__" : 1 00:07:04.840 Fetching value of define "__AVX512DQ__" : 1 00:07:04.840 Fetching value of define "__AVX512F__" : 1 00:07:04.840 Fetching value of define "__AVX512VL__" : 1 00:07:04.840 Fetching value of define "__PCLMUL__" : 1 00:07:04.840 Fetching value of define "__RDRND__" : 1 00:07:04.840 Fetching value of define "__RDSEED__" : 1 00:07:04.840 Fetching value of define "__VPCLMULQDQ__" : 1 00:07:04.840 Fetching value of define "__znver1__" : (undefined) 00:07:04.840 Fetching value of define "__znver2__" : (undefined) 00:07:04.840 Fetching value of define "__znver3__" : (undefined) 00:07:04.840 Fetching value of define "__znver4__" : (undefined) 00:07:04.840 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:04.840 Message: lib/log: Defining dependency "log" 00:07:04.840 Message: lib/kvargs: Defining dependency "kvargs" 00:07:04.840 Message: lib/telemetry: Defining dependency "telemetry" 00:07:04.840 Checking for function "getentropy" : NO 00:07:04.840 Message: lib/eal: Defining dependency "eal" 00:07:04.840 Message: lib/ring: Defining dependency "ring" 00:07:04.840 Message: lib/rcu: Defining dependency "rcu" 00:07:04.840 Message: lib/mempool: Defining dependency "mempool" 00:07:04.840 Message: lib/mbuf: Defining dependency "mbuf" 00:07:04.840 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:04.840 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:04.840 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:04.840 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:04.840 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:04.840 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:07:04.840 Compiler for C supports arguments -mpclmul: YES 00:07:04.840 Compiler for C supports arguments -maes: YES 00:07:04.840 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:04.840 Compiler for C supports arguments -mavx512bw: YES 00:07:04.840 Compiler for C supports arguments -mavx512dq: YES 00:07:04.840 Compiler for C supports arguments -mavx512vl: YES 00:07:04.840 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:04.840 Compiler for C supports arguments -mavx2: YES 00:07:04.840 Compiler for C supports arguments -mavx: YES 00:07:04.840 Message: lib/net: Defining dependency "net" 00:07:04.840 Message: lib/meter: Defining dependency "meter" 00:07:04.840 Message: lib/ethdev: Defining dependency "ethdev" 00:07:04.840 Message: lib/pci: Defining dependency "pci" 00:07:04.840 Message: lib/cmdline: Defining dependency "cmdline" 00:07:04.840 Message: lib/hash: Defining dependency "hash" 00:07:04.840 Message: lib/timer: Defining dependency "timer" 00:07:04.840 Message: lib/compressdev: Defining dependency "compressdev" 00:07:04.840 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:04.840 Message: lib/dmadev: Defining dependency "dmadev" 
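The long run of "Fetching value of define" lines above is Meson querying which instruction-set macros the compiler predefines once -march=native is in effect on this builder; the -mavx512f, -maes, and related compile checks that follow build on those answers. A roughly equivalent manual probe:

    # Dump the compiler's predefined macros under -march=native and pick out
    # the ones the DPDK configure step inspects; on this builder the values
    # should line up with the "Fetching value of define" entries in the log.
    cc -march=native -dM -E - </dev/null | grep -E '__(AVX512F|AVX2|AES|PCLMUL|RDRND|RDSEED)__'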
00:07:04.840 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:04.840 Message: lib/power: Defining dependency "power" 00:07:04.840 Message: lib/reorder: Defining dependency "reorder" 00:07:04.840 Message: lib/security: Defining dependency "security" 00:07:04.840 Has header "linux/userfaultfd.h" : YES 00:07:04.840 Has header "linux/vduse.h" : YES 00:07:04.840 Message: lib/vhost: Defining dependency "vhost" 00:07:04.840 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:04.840 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:04.840 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:04.840 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:04.840 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:04.840 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:04.840 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:04.840 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:04.840 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:04.840 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:04.840 Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:04.840 Configuring doxy-api-html.conf using configuration 00:07:04.840 Configuring doxy-api-man.conf using configuration 00:07:04.840 Program mandb found: YES (/usr/bin/mandb) 00:07:04.840 Program sphinx-build found: NO 00:07:04.840 Configuring rte_build_config.h using configuration 00:07:04.840 Message: 00:07:04.840 ================= 00:07:04.840 Applications Enabled 00:07:04.840 ================= 00:07:04.840 00:07:04.840 apps: 00:07:04.840 00:07:04.840 00:07:04.840 Message: 00:07:04.840 ================= 00:07:04.840 Libraries Enabled 00:07:04.840 ================= 00:07:04.840 00:07:04.840 libs: 00:07:04.840 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:04.840 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:04.840 cryptodev, dmadev, power, reorder, security, vhost, 00:07:04.840 00:07:04.840 Message: 00:07:04.840 =============== 00:07:04.840 Drivers Enabled 00:07:04.840 =============== 00:07:04.840 00:07:04.840 common: 00:07:04.840 00:07:04.840 bus: 00:07:04.840 pci, vdev, 00:07:04.840 mempool: 00:07:04.840 ring, 00:07:04.840 dma: 00:07:04.840 00:07:04.840 net: 00:07:04.840 00:07:04.840 crypto: 00:07:04.840 00:07:04.840 compress: 00:07:04.840 00:07:04.840 vdpa: 00:07:04.840 00:07:04.840 00:07:04.840 Message: 00:07:04.840 ================= 00:07:04.840 Content Skipped 00:07:04.840 ================= 00:07:04.840 00:07:04.840 apps: 00:07:04.840 dumpcap: explicitly disabled via build config 00:07:04.840 graph: explicitly disabled via build config 00:07:04.840 pdump: explicitly disabled via build config 00:07:04.840 proc-info: explicitly disabled via build config 00:07:04.840 test-acl: explicitly disabled via build config 00:07:04.840 test-bbdev: explicitly disabled via build config 00:07:04.840 test-cmdline: explicitly disabled via build config 00:07:04.840 test-compress-perf: explicitly disabled via build config 00:07:04.840 test-crypto-perf: explicitly disabled via build config 00:07:04.840 test-dma-perf: explicitly disabled via build config 00:07:04.840 test-eventdev: explicitly disabled via build config 00:07:04.840 test-fib: explicitly disabled via build config 00:07:04.840 test-flow-perf: explicitly disabled via build config 00:07:04.840 test-gpudev: explicitly disabled 
via build config 00:07:04.840 test-mldev: explicitly disabled via build config 00:07:04.840 test-pipeline: explicitly disabled via build config 00:07:04.840 test-pmd: explicitly disabled via build config 00:07:04.840 test-regex: explicitly disabled via build config 00:07:04.840 test-sad: explicitly disabled via build config 00:07:04.840 test-security-perf: explicitly disabled via build config 00:07:04.840 00:07:04.840 libs: 00:07:04.840 argparse: explicitly disabled via build config 00:07:04.840 metrics: explicitly disabled via build config 00:07:04.840 acl: explicitly disabled via build config 00:07:04.840 bbdev: explicitly disabled via build config 00:07:04.840 bitratestats: explicitly disabled via build config 00:07:04.840 bpf: explicitly disabled via build config 00:07:04.840 cfgfile: explicitly disabled via build config 00:07:04.840 distributor: explicitly disabled via build config 00:07:04.840 efd: explicitly disabled via build config 00:07:04.840 eventdev: explicitly disabled via build config 00:07:04.840 dispatcher: explicitly disabled via build config 00:07:04.840 gpudev: explicitly disabled via build config 00:07:04.840 gro: explicitly disabled via build config 00:07:04.840 gso: explicitly disabled via build config 00:07:04.840 ip_frag: explicitly disabled via build config 00:07:04.840 jobstats: explicitly disabled via build config 00:07:04.840 latencystats: explicitly disabled via build config 00:07:04.840 lpm: explicitly disabled via build config 00:07:04.840 member: explicitly disabled via build config 00:07:04.840 pcapng: explicitly disabled via build config 00:07:04.840 rawdev: explicitly disabled via build config 00:07:04.840 regexdev: explicitly disabled via build config 00:07:04.840 mldev: explicitly disabled via build config 00:07:04.840 rib: explicitly disabled via build config 00:07:04.840 sched: explicitly disabled via build config 00:07:04.840 stack: explicitly disabled via build config 00:07:04.840 ipsec: explicitly disabled via build config 00:07:04.840 pdcp: explicitly disabled via build config 00:07:04.840 fib: explicitly disabled via build config 00:07:04.840 port: explicitly disabled via build config 00:07:04.840 pdump: explicitly disabled via build config 00:07:04.840 table: explicitly disabled via build config 00:07:04.840 pipeline: explicitly disabled via build config 00:07:04.840 graph: explicitly disabled via build config 00:07:04.840 node: explicitly disabled via build config 00:07:04.840 00:07:04.840 drivers: 00:07:04.840 common/cpt: not in enabled drivers build config 00:07:04.840 common/dpaax: not in enabled drivers build config 00:07:04.840 common/iavf: not in enabled drivers build config 00:07:04.840 common/idpf: not in enabled drivers build config 00:07:04.840 common/ionic: not in enabled drivers build config 00:07:04.840 common/mvep: not in enabled drivers build config 00:07:04.840 common/octeontx: not in enabled drivers build config 00:07:04.840 bus/auxiliary: not in enabled drivers build config 00:07:04.840 bus/cdx: not in enabled drivers build config 00:07:04.840 bus/dpaa: not in enabled drivers build config 00:07:04.840 bus/fslmc: not in enabled drivers build config 00:07:04.840 bus/ifpga: not in enabled drivers build config 00:07:04.841 bus/platform: not in enabled drivers build config 00:07:04.841 bus/uacce: not in enabled drivers build config 00:07:04.841 bus/vmbus: not in enabled drivers build config 00:07:04.841 common/cnxk: not in enabled drivers build config 00:07:04.841 common/mlx5: not in enabled drivers build config 00:07:04.841 
common/nfp: not in enabled drivers build config 00:07:04.841 common/nitrox: not in enabled drivers build config 00:07:04.841 common/qat: not in enabled drivers build config 00:07:04.841 common/sfc_efx: not in enabled drivers build config 00:07:04.841 mempool/bucket: not in enabled drivers build config 00:07:04.841 mempool/cnxk: not in enabled drivers build config 00:07:04.841 mempool/dpaa: not in enabled drivers build config 00:07:04.841 mempool/dpaa2: not in enabled drivers build config 00:07:04.841 mempool/octeontx: not in enabled drivers build config 00:07:04.841 mempool/stack: not in enabled drivers build config 00:07:04.841 dma/cnxk: not in enabled drivers build config 00:07:04.841 dma/dpaa: not in enabled drivers build config 00:07:04.841 dma/dpaa2: not in enabled drivers build config 00:07:04.841 dma/hisilicon: not in enabled drivers build config 00:07:04.841 dma/idxd: not in enabled drivers build config 00:07:04.841 dma/ioat: not in enabled drivers build config 00:07:04.841 dma/skeleton: not in enabled drivers build config 00:07:04.841 net/af_packet: not in enabled drivers build config 00:07:04.841 net/af_xdp: not in enabled drivers build config 00:07:04.841 net/ark: not in enabled drivers build config 00:07:04.841 net/atlantic: not in enabled drivers build config 00:07:04.841 net/avp: not in enabled drivers build config 00:07:04.841 net/axgbe: not in enabled drivers build config 00:07:04.841 net/bnx2x: not in enabled drivers build config 00:07:04.841 net/bnxt: not in enabled drivers build config 00:07:04.841 net/bonding: not in enabled drivers build config 00:07:04.841 net/cnxk: not in enabled drivers build config 00:07:04.841 net/cpfl: not in enabled drivers build config 00:07:04.841 net/cxgbe: not in enabled drivers build config 00:07:04.841 net/dpaa: not in enabled drivers build config 00:07:04.841 net/dpaa2: not in enabled drivers build config 00:07:04.841 net/e1000: not in enabled drivers build config 00:07:04.841 net/ena: not in enabled drivers build config 00:07:04.841 net/enetc: not in enabled drivers build config 00:07:04.841 net/enetfec: not in enabled drivers build config 00:07:04.841 net/enic: not in enabled drivers build config 00:07:04.841 net/failsafe: not in enabled drivers build config 00:07:04.841 net/fm10k: not in enabled drivers build config 00:07:04.841 net/gve: not in enabled drivers build config 00:07:04.841 net/hinic: not in enabled drivers build config 00:07:04.841 net/hns3: not in enabled drivers build config 00:07:04.841 net/i40e: not in enabled drivers build config 00:07:04.841 net/iavf: not in enabled drivers build config 00:07:04.841 net/ice: not in enabled drivers build config 00:07:04.841 net/idpf: not in enabled drivers build config 00:07:04.841 net/igc: not in enabled drivers build config 00:07:04.841 net/ionic: not in enabled drivers build config 00:07:04.841 net/ipn3ke: not in enabled drivers build config 00:07:04.841 net/ixgbe: not in enabled drivers build config 00:07:04.841 net/mana: not in enabled drivers build config 00:07:04.841 net/memif: not in enabled drivers build config 00:07:04.841 net/mlx4: not in enabled drivers build config 00:07:04.841 net/mlx5: not in enabled drivers build config 00:07:04.841 net/mvneta: not in enabled drivers build config 00:07:04.841 net/mvpp2: not in enabled drivers build config 00:07:04.841 net/netvsc: not in enabled drivers build config 00:07:04.841 net/nfb: not in enabled drivers build config 00:07:04.841 net/nfp: not in enabled drivers build config 00:07:04.841 net/ngbe: not in enabled drivers build 
config 00:07:04.841 net/null: not in enabled drivers build config 00:07:04.841 net/octeontx: not in enabled drivers build config 00:07:04.841 net/octeon_ep: not in enabled drivers build config 00:07:04.841 net/pcap: not in enabled drivers build config 00:07:04.841 net/pfe: not in enabled drivers build config 00:07:04.841 net/qede: not in enabled drivers build config 00:07:04.841 net/ring: not in enabled drivers build config 00:07:04.841 net/sfc: not in enabled drivers build config 00:07:04.841 net/softnic: not in enabled drivers build config 00:07:04.841 net/tap: not in enabled drivers build config 00:07:04.841 net/thunderx: not in enabled drivers build config 00:07:04.841 net/txgbe: not in enabled drivers build config 00:07:04.841 net/vdev_netvsc: not in enabled drivers build config 00:07:04.841 net/vhost: not in enabled drivers build config 00:07:04.841 net/virtio: not in enabled drivers build config 00:07:04.841 net/vmxnet3: not in enabled drivers build config 00:07:04.841 raw/*: missing internal dependency, "rawdev" 00:07:04.841 crypto/armv8: not in enabled drivers build config 00:07:04.841 crypto/bcmfs: not in enabled drivers build config 00:07:04.841 crypto/caam_jr: not in enabled drivers build config 00:07:04.841 crypto/ccp: not in enabled drivers build config 00:07:04.841 crypto/cnxk: not in enabled drivers build config 00:07:04.841 crypto/dpaa_sec: not in enabled drivers build config 00:07:04.841 crypto/dpaa2_sec: not in enabled drivers build config 00:07:04.841 crypto/ipsec_mb: not in enabled drivers build config 00:07:04.841 crypto/mlx5: not in enabled drivers build config 00:07:04.841 crypto/mvsam: not in enabled drivers build config 00:07:04.841 crypto/nitrox: not in enabled drivers build config 00:07:04.841 crypto/null: not in enabled drivers build config 00:07:04.841 crypto/octeontx: not in enabled drivers build config 00:07:04.841 crypto/openssl: not in enabled drivers build config 00:07:04.841 crypto/scheduler: not in enabled drivers build config 00:07:04.841 crypto/uadk: not in enabled drivers build config 00:07:04.841 crypto/virtio: not in enabled drivers build config 00:07:04.841 compress/isal: not in enabled drivers build config 00:07:04.841 compress/mlx5: not in enabled drivers build config 00:07:04.841 compress/nitrox: not in enabled drivers build config 00:07:04.841 compress/octeontx: not in enabled drivers build config 00:07:04.841 compress/zlib: not in enabled drivers build config 00:07:04.841 regex/*: missing internal dependency, "regexdev" 00:07:04.841 ml/*: missing internal dependency, "mldev" 00:07:04.841 vdpa/ifc: not in enabled drivers build config 00:07:04.841 vdpa/mlx5: not in enabled drivers build config 00:07:04.841 vdpa/nfp: not in enabled drivers build config 00:07:04.841 vdpa/sfc: not in enabled drivers build config 00:07:04.841 event/*: missing internal dependency, "eventdev" 00:07:04.841 baseband/*: missing internal dependency, "bbdev" 00:07:04.841 gpu/*: missing internal dependency, "gpudev" 00:07:04.841 00:07:04.841 00:07:04.841 Build targets in project: 84 00:07:04.841 00:07:04.841 DPDK 24.03.0 00:07:04.841 00:07:04.841 User defined options 00:07:04.841 buildtype : debug 00:07:04.841 default_library : shared 00:07:04.841 libdir : lib 00:07:04.841 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:04.841 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:04.841 c_link_args : 00:07:04.841 cpu_instruction_set: native 00:07:04.841 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:07:04.841 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:07:04.841 enable_docs : false 00:07:04.841 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:04.841 enable_kmods : false 00:07:04.841 max_lcores : 128 00:07:04.841 tests : false 00:07:04.841 00:07:04.841 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:04.841 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:07:04.841 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:04.841 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:04.841 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:04.841 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:04.841 [5/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:04.841 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:04.841 [7/267] Linking static target lib/librte_kvargs.a 00:07:04.841 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:04.841 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:04.841 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:04.841 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:05.101 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:05.101 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:05.101 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:05.101 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:05.101 [16/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:05.101 [17/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:05.101 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:05.101 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:05.101 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:05.101 [21/267] Linking static target lib/librte_log.a 00:07:05.101 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:05.101 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:05.101 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:05.101 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:05.101 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:05.101 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:05.101 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:05.101 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:05.101 [30/267] Linking static target 
lib/librte_pci.a 00:07:05.101 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:05.101 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:05.101 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:05.101 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:05.101 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:05.101 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:05.361 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:05.361 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:05.361 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:05.361 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:05.361 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:05.361 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.361 [43/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.361 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:05.361 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:05.361 [46/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:05.361 [47/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:05.361 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:05.361 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:05.361 [50/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:05.361 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:05.361 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:05.361 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:05.361 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:05.361 [55/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:05.361 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:05.361 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:05.361 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:05.361 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:05.361 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:05.361 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:05.361 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:05.361 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:05.361 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:05.361 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:05.361 [66/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:05.361 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:05.361 [68/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:05.361 [69/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 
00:07:05.361 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:05.361 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:05.361 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:05.361 [73/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:05.361 [74/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:05.361 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:05.361 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:05.361 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:05.361 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:05.361 [79/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:05.361 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:05.361 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:05.361 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:05.361 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:05.361 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:05.622 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:05.622 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:05.622 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:05.622 [88/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:05.622 [89/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:05.622 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:05.622 [91/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:05.622 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:05.622 [93/267] Linking static target lib/librte_telemetry.a 00:07:05.622 [94/267] Linking static target lib/librte_meter.a 00:07:05.622 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:05.622 [96/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:05.622 [97/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:05.622 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:05.622 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:05.622 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:05.622 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:05.622 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:05.622 [103/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:05.622 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:05.622 [105/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:05.622 [106/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:05.622 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:05.622 [108/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:05.622 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:07:05.622 [110/267] Compiling C 
object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:05.622 [111/267] Linking static target lib/librte_ring.a 00:07:05.622 [112/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:05.622 [113/267] Linking static target lib/librte_timer.a 00:07:05.622 [114/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:05.622 [115/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:05.622 [116/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:05.622 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:05.622 [118/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:05.622 [119/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:05.622 [120/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:05.622 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:05.622 [122/267] Linking static target lib/librte_cmdline.a 00:07:05.622 [123/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:05.622 [124/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:05.622 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:05.623 [126/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:05.623 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:05.623 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:05.623 [129/267] Linking static target lib/librte_rcu.a 00:07:05.623 [130/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:05.623 [131/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:05.623 [132/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:05.623 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:05.623 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:05.623 [135/267] Linking static target lib/librte_dmadev.a 00:07:05.623 [136/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:05.623 [137/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:05.623 [138/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:05.623 [139/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:05.623 [140/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:05.623 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:05.623 [142/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:05.623 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:05.623 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:05.623 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:05.623 [146/267] Linking static target lib/librte_mempool.a 00:07:05.623 [147/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:05.623 [148/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:05.623 [149/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:05.623 [150/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:05.623 [151/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:05.623 [152/267] Linking static target lib/librte_power.a 00:07:05.623 [153/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:05.623 [154/267] Linking static target lib/librte_net.a 00:07:05.623 [155/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:05.623 [156/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:05.623 [157/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:05.623 [158/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:05.623 [159/267] Linking static target lib/librte_security.a 00:07:05.623 [160/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.623 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:05.623 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:05.623 [163/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:05.623 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:05.623 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:05.623 [166/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:05.623 [167/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:05.623 [168/267] Linking static target lib/librte_compressdev.a 00:07:05.623 [169/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:05.623 [170/267] Linking static target lib/librte_eal.a 00:07:05.623 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:05.623 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:05.623 [173/267] Linking target lib/librte_log.so.24.1 00:07:05.623 [174/267] Linking static target lib/librte_reorder.a 00:07:05.623 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:05.884 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:05.884 [177/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:05.884 [178/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:05.884 [179/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:05.884 [180/267] Linking static target lib/librte_mbuf.a 00:07:05.884 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:05.884 [182/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.884 [183/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:05.884 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:05.884 [185/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:05.885 [186/267] Linking static target lib/librte_hash.a 00:07:05.885 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:05.885 [188/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:05.885 [189/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:05.885 [190/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:05.885 [191/267] Linking static target drivers/librte_bus_vdev.a 00:07:05.885 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.885 
[193/267] Linking target lib/librte_kvargs.so.24.1 00:07:05.885 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:05.885 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:05.885 [196/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:05.885 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:05.885 [198/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:05.885 [199/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:06.146 [200/267] Linking static target drivers/librte_mempool_ring.a 00:07:06.146 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:06.146 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:06.146 [203/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.146 [204/267] Linking static target drivers/librte_bus_pci.a 00:07:06.146 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.146 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:06.146 [207/267] Linking static target lib/librte_cryptodev.a 00:07:06.146 [208/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.146 [209/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:06.146 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:06.146 [211/267] Linking target lib/librte_telemetry.so.24.1 00:07:06.146 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.407 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:06.407 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.407 [215/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.407 [216/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.407 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:06.667 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.667 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:06.667 [220/267] Linking static target lib/librte_ethdev.a 00:07:06.667 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.667 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.667 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.929 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.929 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.929 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.502 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:07.502 [228/267] Linking static target lib/librte_vhost.a 00:07:08.450 [229/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.841 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:16.560 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.503 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.503 [233/267] Linking target lib/librte_eal.so.24.1 00:07:17.503 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:17.503 [235/267] Linking target lib/librte_ring.so.24.1 00:07:17.503 [236/267] Linking target lib/librte_timer.so.24.1 00:07:17.503 [237/267] Linking target lib/librte_pci.so.24.1 00:07:17.503 [238/267] Linking target lib/librte_meter.so.24.1 00:07:17.503 [239/267] Linking target lib/librte_dmadev.so.24.1 00:07:17.503 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:07:17.764 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:17.764 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:17.764 [243/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:17.764 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:17.764 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:17.764 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:07:17.764 [247/267] Linking target lib/librte_rcu.so.24.1 00:07:17.764 [248/267] Linking target lib/librte_mempool.so.24.1 00:07:17.764 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:18.025 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:18.025 [251/267] Linking target lib/librte_mbuf.so.24.1 00:07:18.025 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:07:18.025 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:18.025 [254/267] Linking target lib/librte_net.so.24.1 00:07:18.025 [255/267] Linking target lib/librte_compressdev.so.24.1 00:07:18.025 [256/267] Linking target lib/librte_reorder.so.24.1 00:07:18.025 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:07:18.287 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:18.287 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:18.287 [260/267] Linking target lib/librte_hash.so.24.1 00:07:18.287 [261/267] Linking target lib/librte_cmdline.so.24.1 00:07:18.287 [262/267] Linking target lib/librte_ethdev.so.24.1 00:07:18.287 [263/267] Linking target lib/librte_security.so.24.1 00:07:18.548 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:18.548 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:18.548 [266/267] Linking target lib/librte_power.so.24.1 00:07:18.548 [267/267] Linking target lib/librte_vhost.so.24.1 00:07:18.548 INFO: autodetecting backend as ninja 00:07:18.548 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:07:21.878 CC lib/log/log.o 00:07:21.878 CC lib/log/log_flags.o 00:07:21.878 CC lib/log/log_deprecated.o 00:07:21.878 CC lib/ut_mock/mock.o 00:07:21.878 CC lib/ut/ut.o 00:07:21.878 LIB 
libspdk_ut_mock.a 00:07:21.878 LIB libspdk_ut.a 00:07:21.878 LIB libspdk_log.a 00:07:21.878 SO libspdk_ut_mock.so.6.0 00:07:21.878 SO libspdk_ut.so.2.0 00:07:21.878 SO libspdk_log.so.7.1 00:07:21.878 SYMLINK libspdk_ut_mock.so 00:07:21.878 SYMLINK libspdk_ut.so 00:07:22.140 SYMLINK libspdk_log.so 00:07:22.400 CC lib/dma/dma.o 00:07:22.400 CXX lib/trace_parser/trace.o 00:07:22.400 CC lib/ioat/ioat.o 00:07:22.400 CC lib/util/base64.o 00:07:22.400 CC lib/util/bit_array.o 00:07:22.400 CC lib/util/cpuset.o 00:07:22.400 CC lib/util/crc16.o 00:07:22.400 CC lib/util/crc32.o 00:07:22.400 CC lib/util/crc32c.o 00:07:22.400 CC lib/util/crc32_ieee.o 00:07:22.400 CC lib/util/crc64.o 00:07:22.400 CC lib/util/dif.o 00:07:22.400 CC lib/util/fd.o 00:07:22.400 CC lib/util/fd_group.o 00:07:22.400 CC lib/util/file.o 00:07:22.400 CC lib/util/hexlify.o 00:07:22.400 CC lib/util/iov.o 00:07:22.400 CC lib/util/math.o 00:07:22.400 CC lib/util/net.o 00:07:22.400 CC lib/util/pipe.o 00:07:22.400 CC lib/util/strerror_tls.o 00:07:22.400 CC lib/util/string.o 00:07:22.400 CC lib/util/uuid.o 00:07:22.400 CC lib/util/xor.o 00:07:22.400 CC lib/util/zipf.o 00:07:22.400 CC lib/util/md5.o 00:07:22.661 CC lib/vfio_user/host/vfio_user_pci.o 00:07:22.661 CC lib/vfio_user/host/vfio_user.o 00:07:22.661 LIB libspdk_dma.a 00:07:22.661 SO libspdk_dma.so.5.0 00:07:22.661 LIB libspdk_ioat.a 00:07:22.661 SO libspdk_ioat.so.7.0 00:07:22.661 SYMLINK libspdk_dma.so 00:07:22.922 SYMLINK libspdk_ioat.so 00:07:22.922 LIB libspdk_vfio_user.a 00:07:22.922 SO libspdk_vfio_user.so.5.0 00:07:22.922 LIB libspdk_util.a 00:07:22.922 SYMLINK libspdk_vfio_user.so 00:07:22.922 SO libspdk_util.so.10.1 00:07:23.184 SYMLINK libspdk_util.so 00:07:23.184 LIB libspdk_trace_parser.a 00:07:23.184 SO libspdk_trace_parser.so.6.0 00:07:23.445 SYMLINK libspdk_trace_parser.so 00:07:23.445 CC lib/conf/conf.o 00:07:23.445 CC lib/json/json_parse.o 00:07:23.445 CC lib/json/json_util.o 00:07:23.445 CC lib/rdma_utils/rdma_utils.o 00:07:23.445 CC lib/json/json_write.o 00:07:23.445 CC lib/vmd/vmd.o 00:07:23.445 CC lib/vmd/led.o 00:07:23.445 CC lib/idxd/idxd.o 00:07:23.445 CC lib/idxd/idxd_user.o 00:07:23.445 CC lib/env_dpdk/env.o 00:07:23.445 CC lib/idxd/idxd_kernel.o 00:07:23.445 CC lib/env_dpdk/memory.o 00:07:23.445 CC lib/env_dpdk/pci.o 00:07:23.445 CC lib/env_dpdk/init.o 00:07:23.445 CC lib/env_dpdk/threads.o 00:07:23.445 CC lib/env_dpdk/pci_ioat.o 00:07:23.445 CC lib/env_dpdk/pci_virtio.o 00:07:23.445 CC lib/env_dpdk/pci_vmd.o 00:07:23.445 CC lib/env_dpdk/pci_idxd.o 00:07:23.445 CC lib/env_dpdk/pci_event.o 00:07:23.445 CC lib/env_dpdk/sigbus_handler.o 00:07:23.445 CC lib/env_dpdk/pci_dpdk.o 00:07:23.445 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:23.445 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:23.706 LIB libspdk_conf.a 00:07:23.706 SO libspdk_conf.so.6.0 00:07:23.706 LIB libspdk_rdma_utils.a 00:07:23.968 LIB libspdk_json.a 00:07:23.968 SO libspdk_rdma_utils.so.1.0 00:07:23.968 SYMLINK libspdk_conf.so 00:07:23.968 SO libspdk_json.so.6.0 00:07:23.968 SYMLINK libspdk_rdma_utils.so 00:07:23.968 SYMLINK libspdk_json.so 00:07:23.968 LIB libspdk_idxd.a 00:07:24.229 SO libspdk_idxd.so.12.1 00:07:24.229 LIB libspdk_vmd.a 00:07:24.229 SO libspdk_vmd.so.6.0 00:07:24.229 SYMLINK libspdk_idxd.so 00:07:24.229 SYMLINK libspdk_vmd.so 00:07:24.229 CC lib/rdma_provider/common.o 00:07:24.229 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:24.229 CC lib/jsonrpc/jsonrpc_server.o 00:07:24.229 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:24.229 CC lib/jsonrpc/jsonrpc_client.o 00:07:24.229 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:07:24.491 LIB libspdk_rdma_provider.a 00:07:24.491 LIB libspdk_jsonrpc.a 00:07:24.491 SO libspdk_rdma_provider.so.7.0 00:07:24.491 SO libspdk_jsonrpc.so.6.0 00:07:24.753 SYMLINK libspdk_rdma_provider.so 00:07:24.753 SYMLINK libspdk_jsonrpc.so 00:07:24.753 LIB libspdk_env_dpdk.a 00:07:24.753 SO libspdk_env_dpdk.so.15.1 00:07:25.015 SYMLINK libspdk_env_dpdk.so 00:07:25.015 CC lib/rpc/rpc.o 00:07:25.277 LIB libspdk_rpc.a 00:07:25.277 SO libspdk_rpc.so.6.0 00:07:25.277 SYMLINK libspdk_rpc.so 00:07:25.850 CC lib/notify/notify.o 00:07:25.850 CC lib/notify/notify_rpc.o 00:07:25.850 CC lib/keyring/keyring.o 00:07:25.850 CC lib/keyring/keyring_rpc.o 00:07:25.850 CC lib/trace/trace.o 00:07:25.850 CC lib/trace/trace_flags.o 00:07:25.850 CC lib/trace/trace_rpc.o 00:07:25.850 LIB libspdk_notify.a 00:07:25.850 SO libspdk_notify.so.6.0 00:07:26.112 LIB libspdk_keyring.a 00:07:26.112 LIB libspdk_trace.a 00:07:26.112 SO libspdk_keyring.so.2.0 00:07:26.112 SYMLINK libspdk_notify.so 00:07:26.112 SO libspdk_trace.so.11.0 00:07:26.112 SYMLINK libspdk_keyring.so 00:07:26.112 SYMLINK libspdk_trace.so 00:07:26.374 CC lib/thread/thread.o 00:07:26.374 CC lib/thread/iobuf.o 00:07:26.374 CC lib/sock/sock.o 00:07:26.374 CC lib/sock/sock_rpc.o 00:07:26.948 LIB libspdk_sock.a 00:07:26.948 SO libspdk_sock.so.10.0 00:07:26.948 SYMLINK libspdk_sock.so 00:07:27.521 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:27.521 CC lib/nvme/nvme_ctrlr.o 00:07:27.521 CC lib/nvme/nvme_fabric.o 00:07:27.521 CC lib/nvme/nvme_ns_cmd.o 00:07:27.521 CC lib/nvme/nvme_ns.o 00:07:27.521 CC lib/nvme/nvme_pcie_common.o 00:07:27.521 CC lib/nvme/nvme_pcie.o 00:07:27.521 CC lib/nvme/nvme_qpair.o 00:07:27.521 CC lib/nvme/nvme.o 00:07:27.521 CC lib/nvme/nvme_quirks.o 00:07:27.521 CC lib/nvme/nvme_transport.o 00:07:27.521 CC lib/nvme/nvme_discovery.o 00:07:27.521 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:27.521 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:27.521 CC lib/nvme/nvme_tcp.o 00:07:27.521 CC lib/nvme/nvme_opal.o 00:07:27.521 CC lib/nvme/nvme_io_msg.o 00:07:27.521 CC lib/nvme/nvme_poll_group.o 00:07:27.521 CC lib/nvme/nvme_zns.o 00:07:27.521 CC lib/nvme/nvme_stubs.o 00:07:27.521 CC lib/nvme/nvme_auth.o 00:07:27.521 CC lib/nvme/nvme_cuse.o 00:07:27.521 CC lib/nvme/nvme_vfio_user.o 00:07:27.521 CC lib/nvme/nvme_rdma.o 00:07:27.784 LIB libspdk_thread.a 00:07:27.784 SO libspdk_thread.so.11.0 00:07:28.045 SYMLINK libspdk_thread.so 00:07:28.307 CC lib/accel/accel.o 00:07:28.307 CC lib/blob/blobstore.o 00:07:28.307 CC lib/blob/request.o 00:07:28.307 CC lib/accel/accel_rpc.o 00:07:28.307 CC lib/fsdev/fsdev.o 00:07:28.307 CC lib/fsdev/fsdev_io.o 00:07:28.307 CC lib/blob/zeroes.o 00:07:28.307 CC lib/vfu_tgt/tgt_endpoint.o 00:07:28.307 CC lib/accel/accel_sw.o 00:07:28.307 CC lib/fsdev/fsdev_rpc.o 00:07:28.307 CC lib/blob/blob_bs_dev.o 00:07:28.307 CC lib/vfu_tgt/tgt_rpc.o 00:07:28.307 CC lib/virtio/virtio.o 00:07:28.307 CC lib/virtio/virtio_vhost_user.o 00:07:28.307 CC lib/virtio/virtio_vfio_user.o 00:07:28.307 CC lib/init/json_config.o 00:07:28.307 CC lib/virtio/virtio_pci.o 00:07:28.307 CC lib/init/subsystem.o 00:07:28.307 CC lib/init/subsystem_rpc.o 00:07:28.307 CC lib/init/rpc.o 00:07:28.569 LIB libspdk_init.a 00:07:28.831 SO libspdk_init.so.6.0 00:07:28.831 LIB libspdk_virtio.a 00:07:28.831 LIB libspdk_vfu_tgt.a 00:07:28.831 SO libspdk_virtio.so.7.0 00:07:28.831 SO libspdk_vfu_tgt.so.3.0 00:07:28.831 SYMLINK libspdk_init.so 00:07:28.831 SYMLINK libspdk_vfu_tgt.so 00:07:28.832 SYMLINK libspdk_virtio.so 00:07:29.093 LIB libspdk_fsdev.a 
00:07:29.093 SO libspdk_fsdev.so.2.0 00:07:29.093 SYMLINK libspdk_fsdev.so 00:07:29.093 CC lib/event/app.o 00:07:29.093 CC lib/event/reactor.o 00:07:29.093 CC lib/event/log_rpc.o 00:07:29.093 CC lib/event/app_rpc.o 00:07:29.093 CC lib/event/scheduler_static.o 00:07:29.354 LIB libspdk_nvme.a 00:07:29.354 LIB libspdk_accel.a 00:07:29.354 SO libspdk_accel.so.16.0 00:07:29.354 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:29.354 SO libspdk_nvme.so.15.0 00:07:29.354 SYMLINK libspdk_accel.so 00:07:29.617 LIB libspdk_event.a 00:07:29.617 SO libspdk_event.so.14.0 00:07:29.617 SYMLINK libspdk_nvme.so 00:07:29.617 SYMLINK libspdk_event.so 00:07:29.879 CC lib/bdev/bdev.o 00:07:29.879 CC lib/bdev/bdev_rpc.o 00:07:29.879 CC lib/bdev/bdev_zone.o 00:07:29.879 CC lib/bdev/part.o 00:07:29.879 CC lib/bdev/scsi_nvme.o 00:07:30.140 LIB libspdk_fuse_dispatcher.a 00:07:30.140 SO libspdk_fuse_dispatcher.so.1.0 00:07:30.140 SYMLINK libspdk_fuse_dispatcher.so 00:07:31.083 LIB libspdk_blob.a 00:07:31.083 SO libspdk_blob.so.11.0 00:07:31.083 SYMLINK libspdk_blob.so 00:07:31.657 CC lib/blobfs/blobfs.o 00:07:31.657 CC lib/blobfs/tree.o 00:07:31.657 CC lib/lvol/lvol.o 00:07:32.229 LIB libspdk_bdev.a 00:07:32.229 SO libspdk_bdev.so.17.0 00:07:32.229 LIB libspdk_blobfs.a 00:07:32.229 SO libspdk_blobfs.so.10.0 00:07:32.229 SYMLINK libspdk_bdev.so 00:07:32.229 LIB libspdk_lvol.a 00:07:32.229 SO libspdk_lvol.so.10.0 00:07:32.229 SYMLINK libspdk_blobfs.so 00:07:32.490 SYMLINK libspdk_lvol.so 00:07:32.749 CC lib/nbd/nbd.o 00:07:32.749 CC lib/nvmf/ctrlr.o 00:07:32.749 CC lib/nbd/nbd_rpc.o 00:07:32.749 CC lib/nvmf/ctrlr_discovery.o 00:07:32.749 CC lib/scsi/dev.o 00:07:32.749 CC lib/nvmf/ctrlr_bdev.o 00:07:32.749 CC lib/scsi/lun.o 00:07:32.749 CC lib/nvmf/subsystem.o 00:07:32.749 CC lib/scsi/port.o 00:07:32.749 CC lib/nvmf/nvmf.o 00:07:32.749 CC lib/scsi/scsi.o 00:07:32.749 CC lib/nvmf/nvmf_rpc.o 00:07:32.749 CC lib/scsi/scsi_bdev.o 00:07:32.749 CC lib/ublk/ublk.o 00:07:32.749 CC lib/scsi/scsi_pr.o 00:07:32.749 CC lib/ublk/ublk_rpc.o 00:07:32.749 CC lib/nvmf/transport.o 00:07:32.749 CC lib/scsi/scsi_rpc.o 00:07:32.749 CC lib/ftl/ftl_core.o 00:07:32.749 CC lib/nvmf/tcp.o 00:07:32.749 CC lib/scsi/task.o 00:07:32.749 CC lib/ftl/ftl_init.o 00:07:32.749 CC lib/nvmf/stubs.o 00:07:32.749 CC lib/ftl/ftl_layout.o 00:07:32.749 CC lib/nvmf/mdns_server.o 00:07:32.749 CC lib/ftl/ftl_debug.o 00:07:32.749 CC lib/ftl/ftl_io.o 00:07:32.749 CC lib/nvmf/vfio_user.o 00:07:32.749 CC lib/ftl/ftl_sb.o 00:07:32.749 CC lib/nvmf/rdma.o 00:07:32.749 CC lib/ftl/ftl_l2p.o 00:07:32.749 CC lib/nvmf/auth.o 00:07:32.749 CC lib/ftl/ftl_l2p_flat.o 00:07:32.749 CC lib/ftl/ftl_nv_cache.o 00:07:32.749 CC lib/ftl/ftl_band.o 00:07:32.749 CC lib/ftl/ftl_band_ops.o 00:07:32.749 CC lib/ftl/ftl_writer.o 00:07:32.749 CC lib/ftl/ftl_rq.o 00:07:32.749 CC lib/ftl/ftl_reloc.o 00:07:32.749 CC lib/ftl/ftl_l2p_cache.o 00:07:32.749 CC lib/ftl/ftl_p2l.o 00:07:32.749 CC lib/ftl/ftl_p2l_log.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:32.749 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:07:32.749 CC lib/ftl/utils/ftl_conf.o 00:07:32.749 CC lib/ftl/utils/ftl_md.o 00:07:32.749 CC lib/ftl/utils/ftl_mempool.o 00:07:32.749 CC lib/ftl/utils/ftl_bitmap.o 00:07:32.749 CC lib/ftl/utils/ftl_property.o 00:07:32.749 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:32.749 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:32.749 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:32.749 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:32.749 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:32.749 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:32.749 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:32.749 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:32.749 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:32.749 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:32.749 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:32.749 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:32.749 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:32.749 CC lib/ftl/base/ftl_base_dev.o 00:07:32.749 CC lib/ftl/base/ftl_base_bdev.o 00:07:32.749 CC lib/ftl/ftl_trace.o 00:07:33.321 LIB libspdk_nbd.a 00:07:33.321 SO libspdk_nbd.so.7.0 00:07:33.321 LIB libspdk_scsi.a 00:07:33.321 SYMLINK libspdk_nbd.so 00:07:33.321 SO libspdk_scsi.so.9.0 00:07:33.321 SYMLINK libspdk_scsi.so 00:07:33.321 LIB libspdk_ublk.a 00:07:33.582 SO libspdk_ublk.so.3.0 00:07:33.582 SYMLINK libspdk_ublk.so 00:07:33.582 LIB libspdk_ftl.a 00:07:33.843 CC lib/vhost/vhost.o 00:07:33.843 CC lib/vhost/vhost_rpc.o 00:07:33.843 CC lib/vhost/vhost_scsi.o 00:07:33.843 CC lib/vhost/vhost_blk.o 00:07:33.843 CC lib/vhost/rte_vhost_user.o 00:07:33.843 CC lib/iscsi/conn.o 00:07:33.843 CC lib/iscsi/init_grp.o 00:07:33.843 CC lib/iscsi/iscsi.o 00:07:33.843 CC lib/iscsi/param.o 00:07:33.843 CC lib/iscsi/portal_grp.o 00:07:33.843 CC lib/iscsi/tgt_node.o 00:07:33.843 CC lib/iscsi/iscsi_subsystem.o 00:07:33.843 CC lib/iscsi/iscsi_rpc.o 00:07:33.843 CC lib/iscsi/task.o 00:07:33.843 SO libspdk_ftl.so.9.0 00:07:34.104 SYMLINK libspdk_ftl.so 00:07:34.675 LIB libspdk_nvmf.a 00:07:34.675 SO libspdk_nvmf.so.20.0 00:07:34.675 LIB libspdk_vhost.a 00:07:34.675 SO libspdk_vhost.so.8.0 00:07:34.936 SYMLINK libspdk_nvmf.so 00:07:34.936 SYMLINK libspdk_vhost.so 00:07:34.936 LIB libspdk_iscsi.a 00:07:34.936 SO libspdk_iscsi.so.8.0 00:07:35.198 SYMLINK libspdk_iscsi.so 00:07:35.769 CC module/env_dpdk/env_dpdk_rpc.o 00:07:35.769 CC module/vfu_device/vfu_virtio.o 00:07:35.769 CC module/vfu_device/vfu_virtio_blk.o 00:07:35.769 CC module/vfu_device/vfu_virtio_scsi.o 00:07:35.769 CC module/vfu_device/vfu_virtio_fs.o 00:07:35.769 CC module/vfu_device/vfu_virtio_rpc.o 00:07:36.030 LIB libspdk_env_dpdk_rpc.a 00:07:36.030 CC module/accel/ioat/accel_ioat.o 00:07:36.030 CC module/accel/ioat/accel_ioat_rpc.o 00:07:36.030 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:36.030 CC module/accel/dsa/accel_dsa.o 00:07:36.030 CC module/accel/dsa/accel_dsa_rpc.o 00:07:36.030 CC module/sock/posix/posix.o 00:07:36.030 CC module/accel/error/accel_error.o 00:07:36.030 CC module/accel/error/accel_error_rpc.o 00:07:36.030 CC module/scheduler/gscheduler/gscheduler.o 00:07:36.030 CC module/blob/bdev/blob_bdev.o 00:07:36.030 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:36.030 CC module/keyring/linux/keyring.o 00:07:36.030 CC module/accel/iaa/accel_iaa.o 00:07:36.030 CC module/keyring/file/keyring_rpc.o 00:07:36.030 CC module/accel/iaa/accel_iaa_rpc.o 00:07:36.030 CC module/keyring/linux/keyring_rpc.o 00:07:36.030 CC module/keyring/file/keyring.o 00:07:36.030 CC module/fsdev/aio/fsdev_aio.o 00:07:36.030 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:36.030 CC 
module/fsdev/aio/linux_aio_mgr.o 00:07:36.030 SO libspdk_env_dpdk_rpc.so.6.0 00:07:36.030 SYMLINK libspdk_env_dpdk_rpc.so 00:07:36.291 LIB libspdk_scheduler_gscheduler.a 00:07:36.291 LIB libspdk_scheduler_dpdk_governor.a 00:07:36.291 LIB libspdk_accel_ioat.a 00:07:36.291 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:36.291 SO libspdk_scheduler_gscheduler.so.4.0 00:07:36.291 LIB libspdk_scheduler_dynamic.a 00:07:36.291 LIB libspdk_keyring_file.a 00:07:36.291 LIB libspdk_accel_error.a 00:07:36.291 LIB libspdk_keyring_linux.a 00:07:36.291 SO libspdk_accel_ioat.so.6.0 00:07:36.291 LIB libspdk_accel_iaa.a 00:07:36.291 SO libspdk_scheduler_dynamic.so.4.0 00:07:36.291 SO libspdk_keyring_file.so.2.0 00:07:36.291 SO libspdk_keyring_linux.so.1.0 00:07:36.291 SO libspdk_accel_error.so.2.0 00:07:36.291 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:36.291 LIB libspdk_blob_bdev.a 00:07:36.291 SYMLINK libspdk_scheduler_gscheduler.so 00:07:36.291 SO libspdk_accel_iaa.so.3.0 00:07:36.291 LIB libspdk_accel_dsa.a 00:07:36.291 SYMLINK libspdk_accel_ioat.so 00:07:36.291 SYMLINK libspdk_scheduler_dynamic.so 00:07:36.291 SO libspdk_accel_dsa.so.5.0 00:07:36.291 SO libspdk_blob_bdev.so.11.0 00:07:36.291 SYMLINK libspdk_keyring_linux.so 00:07:36.291 SYMLINK libspdk_keyring_file.so 00:07:36.291 SYMLINK libspdk_accel_error.so 00:07:36.291 SYMLINK libspdk_accel_iaa.so 00:07:36.291 SYMLINK libspdk_accel_dsa.so 00:07:36.291 SYMLINK libspdk_blob_bdev.so 00:07:36.291 LIB libspdk_vfu_device.a 00:07:36.552 SO libspdk_vfu_device.so.3.0 00:07:36.552 SYMLINK libspdk_vfu_device.so 00:07:36.552 LIB libspdk_fsdev_aio.a 00:07:36.813 SO libspdk_fsdev_aio.so.1.0 00:07:36.813 LIB libspdk_sock_posix.a 00:07:36.813 SO libspdk_sock_posix.so.6.0 00:07:36.813 SYMLINK libspdk_fsdev_aio.so 00:07:36.813 SYMLINK libspdk_sock_posix.so 00:07:37.072 CC module/bdev/gpt/gpt.o 00:07:37.072 CC module/bdev/gpt/vbdev_gpt.o 00:07:37.072 CC module/blobfs/bdev/blobfs_bdev.o 00:07:37.072 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:37.072 CC module/bdev/delay/vbdev_delay.o 00:07:37.072 CC module/bdev/error/vbdev_error.o 00:07:37.072 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:37.072 CC module/bdev/malloc/bdev_malloc.o 00:07:37.072 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:37.072 CC module/bdev/error/vbdev_error_rpc.o 00:07:37.072 CC module/bdev/lvol/vbdev_lvol.o 00:07:37.072 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:37.072 CC module/bdev/split/vbdev_split.o 00:07:37.072 CC module/bdev/split/vbdev_split_rpc.o 00:07:37.072 CC module/bdev/raid/bdev_raid.o 00:07:37.072 CC module/bdev/null/bdev_null.o 00:07:37.072 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:37.072 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:37.072 CC module/bdev/raid/bdev_raid_rpc.o 00:07:37.072 CC module/bdev/null/bdev_null_rpc.o 00:07:37.072 CC module/bdev/raid/bdev_raid_sb.o 00:07:37.072 CC module/bdev/raid/raid1.o 00:07:37.072 CC module/bdev/raid/raid0.o 00:07:37.072 CC module/bdev/raid/concat.o 00:07:37.072 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:37.072 CC module/bdev/passthru/vbdev_passthru.o 00:07:37.072 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:37.072 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:37.072 CC module/bdev/nvme/bdev_nvme.o 00:07:37.072 CC module/bdev/aio/bdev_aio.o 00:07:37.072 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:37.072 CC module/bdev/ftl/bdev_ftl.o 00:07:37.072 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:37.072 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:37.072 CC module/bdev/nvme/nvme_rpc.o 00:07:37.072 CC 
module/bdev/iscsi/bdev_iscsi.o 00:07:37.072 CC module/bdev/aio/bdev_aio_rpc.o 00:07:37.072 CC module/bdev/nvme/bdev_mdns_client.o 00:07:37.072 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:37.073 CC module/bdev/nvme/vbdev_opal.o 00:07:37.073 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:37.073 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:37.333 LIB libspdk_blobfs_bdev.a 00:07:37.333 SO libspdk_blobfs_bdev.so.6.0 00:07:37.333 LIB libspdk_bdev_gpt.a 00:07:37.333 LIB libspdk_bdev_split.a 00:07:37.333 LIB libspdk_bdev_null.a 00:07:37.333 SO libspdk_bdev_gpt.so.6.0 00:07:37.333 LIB libspdk_bdev_error.a 00:07:37.333 SO libspdk_bdev_split.so.6.0 00:07:37.333 SYMLINK libspdk_blobfs_bdev.so 00:07:37.333 SO libspdk_bdev_null.so.6.0 00:07:37.333 SO libspdk_bdev_error.so.6.0 00:07:37.333 LIB libspdk_bdev_ftl.a 00:07:37.333 LIB libspdk_bdev_passthru.a 00:07:37.333 SYMLINK libspdk_bdev_gpt.so 00:07:37.333 LIB libspdk_bdev_zone_block.a 00:07:37.333 LIB libspdk_bdev_malloc.a 00:07:37.333 SYMLINK libspdk_bdev_split.so 00:07:37.333 LIB libspdk_bdev_delay.a 00:07:37.333 SYMLINK libspdk_bdev_error.so 00:07:37.333 SYMLINK libspdk_bdev_null.so 00:07:37.333 LIB libspdk_bdev_aio.a 00:07:37.594 SO libspdk_bdev_ftl.so.6.0 00:07:37.594 SO libspdk_bdev_passthru.so.6.0 00:07:37.594 LIB libspdk_bdev_iscsi.a 00:07:37.594 SO libspdk_bdev_zone_block.so.6.0 00:07:37.594 SO libspdk_bdev_malloc.so.6.0 00:07:37.594 SO libspdk_bdev_delay.so.6.0 00:07:37.594 SO libspdk_bdev_aio.so.6.0 00:07:37.594 SO libspdk_bdev_iscsi.so.6.0 00:07:37.594 SYMLINK libspdk_bdev_ftl.so 00:07:37.594 SYMLINK libspdk_bdev_passthru.so 00:07:37.594 SYMLINK libspdk_bdev_zone_block.so 00:07:37.594 SYMLINK libspdk_bdev_malloc.so 00:07:37.594 SYMLINK libspdk_bdev_delay.so 00:07:37.594 SYMLINK libspdk_bdev_aio.so 00:07:37.594 SYMLINK libspdk_bdev_iscsi.so 00:07:37.594 LIB libspdk_bdev_lvol.a 00:07:37.594 LIB libspdk_bdev_virtio.a 00:07:37.594 SO libspdk_bdev_lvol.so.6.0 00:07:37.594 SO libspdk_bdev_virtio.so.6.0 00:07:37.594 SYMLINK libspdk_bdev_lvol.so 00:07:37.855 SYMLINK libspdk_bdev_virtio.so 00:07:38.116 LIB libspdk_bdev_raid.a 00:07:38.116 SO libspdk_bdev_raid.so.6.0 00:07:38.116 SYMLINK libspdk_bdev_raid.so 00:07:39.502 LIB libspdk_bdev_nvme.a 00:07:39.502 SO libspdk_bdev_nvme.so.7.1 00:07:39.502 SYMLINK libspdk_bdev_nvme.so 00:07:40.445 CC module/event/subsystems/iobuf/iobuf.o 00:07:40.445 CC module/event/subsystems/vmd/vmd.o 00:07:40.445 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:40.445 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:40.445 CC module/event/subsystems/fsdev/fsdev.o 00:07:40.445 CC module/event/subsystems/sock/sock.o 00:07:40.445 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:40.445 CC module/event/subsystems/keyring/keyring.o 00:07:40.445 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:40.445 CC module/event/subsystems/scheduler/scheduler.o 00:07:40.445 LIB libspdk_event_vfu_tgt.a 00:07:40.446 LIB libspdk_event_fsdev.a 00:07:40.446 LIB libspdk_event_keyring.a 00:07:40.446 LIB libspdk_event_vmd.a 00:07:40.446 LIB libspdk_event_vhost_blk.a 00:07:40.446 LIB libspdk_event_sock.a 00:07:40.446 LIB libspdk_event_iobuf.a 00:07:40.446 LIB libspdk_event_scheduler.a 00:07:40.446 SO libspdk_event_fsdev.so.1.0 00:07:40.446 SO libspdk_event_vhost_blk.so.3.0 00:07:40.446 SO libspdk_event_vfu_tgt.so.3.0 00:07:40.446 SO libspdk_event_keyring.so.1.0 00:07:40.446 SO libspdk_event_vmd.so.6.0 00:07:40.446 SO libspdk_event_sock.so.5.0 00:07:40.446 SO libspdk_event_scheduler.so.4.0 00:07:40.446 SO libspdk_event_iobuf.so.3.0 00:07:40.446 
SYMLINK libspdk_event_fsdev.so 00:07:40.446 SYMLINK libspdk_event_vhost_blk.so 00:07:40.707 SYMLINK libspdk_event_vfu_tgt.so 00:07:40.707 SYMLINK libspdk_event_keyring.so 00:07:40.707 SYMLINK libspdk_event_sock.so 00:07:40.707 SYMLINK libspdk_event_vmd.so 00:07:40.707 SYMLINK libspdk_event_scheduler.so 00:07:40.707 SYMLINK libspdk_event_iobuf.so 00:07:40.968 CC module/event/subsystems/accel/accel.o 00:07:41.229 LIB libspdk_event_accel.a 00:07:41.229 SO libspdk_event_accel.so.6.0 00:07:41.229 SYMLINK libspdk_event_accel.so 00:07:41.490 CC module/event/subsystems/bdev/bdev.o 00:07:41.751 LIB libspdk_event_bdev.a 00:07:41.751 SO libspdk_event_bdev.so.6.0 00:07:42.012 SYMLINK libspdk_event_bdev.so 00:07:42.281 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:42.281 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:42.281 CC module/event/subsystems/ublk/ublk.o 00:07:42.281 CC module/event/subsystems/scsi/scsi.o 00:07:42.281 CC module/event/subsystems/nbd/nbd.o 00:07:42.542 LIB libspdk_event_ublk.a 00:07:42.542 LIB libspdk_event_nbd.a 00:07:42.542 LIB libspdk_event_scsi.a 00:07:42.542 SO libspdk_event_ublk.so.3.0 00:07:42.542 SO libspdk_event_nbd.so.6.0 00:07:42.542 SO libspdk_event_scsi.so.6.0 00:07:42.542 LIB libspdk_event_nvmf.a 00:07:42.542 SYMLINK libspdk_event_nbd.so 00:07:42.542 SYMLINK libspdk_event_ublk.so 00:07:42.542 SYMLINK libspdk_event_scsi.so 00:07:42.542 SO libspdk_event_nvmf.so.6.0 00:07:42.542 SYMLINK libspdk_event_nvmf.so 00:07:42.804 CC module/event/subsystems/iscsi/iscsi.o 00:07:42.804 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:43.066 LIB libspdk_event_vhost_scsi.a 00:07:43.066 SO libspdk_event_vhost_scsi.so.3.0 00:07:43.066 LIB libspdk_event_iscsi.a 00:07:43.066 SO libspdk_event_iscsi.so.6.0 00:07:43.066 SYMLINK libspdk_event_vhost_scsi.so 00:07:43.327 SYMLINK libspdk_event_iscsi.so 00:07:43.327 SO libspdk.so.6.0 00:07:43.327 SYMLINK libspdk.so 00:07:43.900 CXX app/trace/trace.o 00:07:43.900 CC app/spdk_lspci/spdk_lspci.o 00:07:43.900 TEST_HEADER include/spdk/accel.h 00:07:43.900 CC test/rpc_client/rpc_client_test.o 00:07:43.900 TEST_HEADER include/spdk/accel_module.h 00:07:43.900 TEST_HEADER include/spdk/barrier.h 00:07:43.900 TEST_HEADER include/spdk/assert.h 00:07:43.900 TEST_HEADER include/spdk/base64.h 00:07:43.900 TEST_HEADER include/spdk/bdev.h 00:07:43.900 CC app/trace_record/trace_record.o 00:07:43.900 TEST_HEADER include/spdk/bdev_module.h 00:07:43.900 TEST_HEADER include/spdk/bdev_zone.h 00:07:43.900 CC app/spdk_top/spdk_top.o 00:07:43.900 TEST_HEADER include/spdk/bit_array.h 00:07:43.900 CC app/spdk_nvme_perf/perf.o 00:07:43.900 TEST_HEADER include/spdk/bit_pool.h 00:07:43.900 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:43.900 TEST_HEADER include/spdk/blob_bdev.h 00:07:43.900 TEST_HEADER include/spdk/blob.h 00:07:43.900 TEST_HEADER include/spdk/blobfs.h 00:07:43.900 CC app/spdk_nvme_identify/identify.o 00:07:43.900 TEST_HEADER include/spdk/conf.h 00:07:43.900 TEST_HEADER include/spdk/config.h 00:07:43.900 CC app/spdk_nvme_discover/discovery_aer.o 00:07:43.900 TEST_HEADER include/spdk/cpuset.h 00:07:43.900 TEST_HEADER include/spdk/crc16.h 00:07:43.900 TEST_HEADER include/spdk/crc32.h 00:07:43.900 TEST_HEADER include/spdk/crc64.h 00:07:43.900 TEST_HEADER include/spdk/dif.h 00:07:43.900 TEST_HEADER include/spdk/dma.h 00:07:43.900 TEST_HEADER include/spdk/endian.h 00:07:43.900 TEST_HEADER include/spdk/env_dpdk.h 00:07:43.900 TEST_HEADER include/spdk/env.h 00:07:43.900 TEST_HEADER include/spdk/event.h 00:07:43.900 TEST_HEADER include/spdk/fd_group.h 
00:07:43.900 TEST_HEADER include/spdk/file.h 00:07:43.900 TEST_HEADER include/spdk/fd.h 00:07:43.900 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:43.900 TEST_HEADER include/spdk/fsdev.h 00:07:43.900 TEST_HEADER include/spdk/fsdev_module.h 00:07:43.900 TEST_HEADER include/spdk/ftl.h 00:07:43.900 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:43.900 TEST_HEADER include/spdk/gpt_spec.h 00:07:43.900 TEST_HEADER include/spdk/histogram_data.h 00:07:43.900 TEST_HEADER include/spdk/hexlify.h 00:07:43.900 TEST_HEADER include/spdk/idxd_spec.h 00:07:43.900 TEST_HEADER include/spdk/idxd.h 00:07:43.900 TEST_HEADER include/spdk/init.h 00:07:43.900 TEST_HEADER include/spdk/ioat_spec.h 00:07:43.900 TEST_HEADER include/spdk/ioat.h 00:07:43.900 TEST_HEADER include/spdk/iscsi_spec.h 00:07:43.900 CC app/spdk_dd/spdk_dd.o 00:07:43.900 TEST_HEADER include/spdk/json.h 00:07:43.900 TEST_HEADER include/spdk/jsonrpc.h 00:07:43.900 CC app/nvmf_tgt/nvmf_main.o 00:07:43.900 TEST_HEADER include/spdk/keyring.h 00:07:43.900 TEST_HEADER include/spdk/keyring_module.h 00:07:43.900 TEST_HEADER include/spdk/likely.h 00:07:43.900 TEST_HEADER include/spdk/log.h 00:07:43.900 TEST_HEADER include/spdk/lvol.h 00:07:43.900 TEST_HEADER include/spdk/memory.h 00:07:43.900 TEST_HEADER include/spdk/md5.h 00:07:43.900 CC app/spdk_tgt/spdk_tgt.o 00:07:43.900 CC app/iscsi_tgt/iscsi_tgt.o 00:07:43.900 TEST_HEADER include/spdk/mmio.h 00:07:43.900 TEST_HEADER include/spdk/net.h 00:07:43.900 TEST_HEADER include/spdk/nbd.h 00:07:43.900 TEST_HEADER include/spdk/notify.h 00:07:43.900 TEST_HEADER include/spdk/nvme.h 00:07:43.900 TEST_HEADER include/spdk/nvme_intel.h 00:07:43.900 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:43.900 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:43.901 TEST_HEADER include/spdk/nvme_spec.h 00:07:43.901 TEST_HEADER include/spdk/nvme_zns.h 00:07:43.901 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:43.901 TEST_HEADER include/spdk/nvmf.h 00:07:43.901 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:43.901 TEST_HEADER include/spdk/nvmf_spec.h 00:07:43.901 TEST_HEADER include/spdk/nvmf_transport.h 00:07:43.901 TEST_HEADER include/spdk/opal.h 00:07:43.901 TEST_HEADER include/spdk/pci_ids.h 00:07:43.901 TEST_HEADER include/spdk/opal_spec.h 00:07:43.901 TEST_HEADER include/spdk/queue.h 00:07:43.901 TEST_HEADER include/spdk/pipe.h 00:07:43.901 TEST_HEADER include/spdk/rpc.h 00:07:43.901 TEST_HEADER include/spdk/scheduler.h 00:07:43.901 TEST_HEADER include/spdk/reduce.h 00:07:43.901 TEST_HEADER include/spdk/scsi.h 00:07:43.901 TEST_HEADER include/spdk/sock.h 00:07:43.901 TEST_HEADER include/spdk/scsi_spec.h 00:07:43.901 TEST_HEADER include/spdk/stdinc.h 00:07:43.901 TEST_HEADER include/spdk/string.h 00:07:43.901 TEST_HEADER include/spdk/trace.h 00:07:43.901 TEST_HEADER include/spdk/thread.h 00:07:43.901 TEST_HEADER include/spdk/tree.h 00:07:43.901 TEST_HEADER include/spdk/trace_parser.h 00:07:43.901 TEST_HEADER include/spdk/ublk.h 00:07:43.901 TEST_HEADER include/spdk/uuid.h 00:07:43.901 TEST_HEADER include/spdk/util.h 00:07:43.901 TEST_HEADER include/spdk/version.h 00:07:43.901 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:43.901 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:43.901 TEST_HEADER include/spdk/vhost.h 00:07:43.901 TEST_HEADER include/spdk/vmd.h 00:07:43.901 TEST_HEADER include/spdk/xor.h 00:07:43.901 TEST_HEADER include/spdk/zipf.h 00:07:43.901 CXX test/cpp_headers/accel.o 00:07:43.901 CXX test/cpp_headers/accel_module.o 00:07:43.901 CXX test/cpp_headers/assert.o 00:07:43.901 CXX test/cpp_headers/barrier.o 
00:07:43.901 CXX test/cpp_headers/base64.o 00:07:43.901 CXX test/cpp_headers/bdev_module.o 00:07:43.901 CXX test/cpp_headers/bdev_zone.o 00:07:43.901 CXX test/cpp_headers/bdev.o 00:07:43.901 CXX test/cpp_headers/bit_array.o 00:07:43.901 CXX test/cpp_headers/bit_pool.o 00:07:43.901 CXX test/cpp_headers/blob_bdev.o 00:07:43.901 CXX test/cpp_headers/blobfs_bdev.o 00:07:43.901 CXX test/cpp_headers/blobfs.o 00:07:43.901 CXX test/cpp_headers/conf.o 00:07:43.901 CXX test/cpp_headers/blob.o 00:07:43.901 CXX test/cpp_headers/config.o 00:07:43.901 CXX test/cpp_headers/cpuset.o 00:07:43.901 CXX test/cpp_headers/crc16.o 00:07:43.901 CXX test/cpp_headers/crc32.o 00:07:43.901 CXX test/cpp_headers/crc64.o 00:07:43.901 CXX test/cpp_headers/dif.o 00:07:43.901 CXX test/cpp_headers/endian.o 00:07:43.901 CXX test/cpp_headers/dma.o 00:07:43.901 CXX test/cpp_headers/env_dpdk.o 00:07:43.901 CXX test/cpp_headers/env.o 00:07:43.901 CXX test/cpp_headers/event.o 00:07:43.901 CXX test/cpp_headers/fd.o 00:07:43.901 CXX test/cpp_headers/fd_group.o 00:07:43.901 CXX test/cpp_headers/file.o 00:07:43.901 CXX test/cpp_headers/fsdev_module.o 00:07:43.901 CXX test/cpp_headers/fsdev.o 00:07:43.901 CXX test/cpp_headers/ftl.o 00:07:43.901 CXX test/cpp_headers/fuse_dispatcher.o 00:07:43.901 CXX test/cpp_headers/gpt_spec.o 00:07:43.901 CXX test/cpp_headers/histogram_data.o 00:07:43.901 CXX test/cpp_headers/hexlify.o 00:07:43.901 CXX test/cpp_headers/idxd_spec.o 00:07:43.901 CXX test/cpp_headers/idxd.o 00:07:43.901 CXX test/cpp_headers/ioat.o 00:07:43.901 CXX test/cpp_headers/init.o 00:07:43.901 CXX test/cpp_headers/ioat_spec.o 00:07:43.901 CXX test/cpp_headers/json.o 00:07:43.901 CXX test/cpp_headers/iscsi_spec.o 00:07:43.901 CXX test/cpp_headers/keyring.o 00:07:43.901 CXX test/cpp_headers/keyring_module.o 00:07:43.901 CXX test/cpp_headers/jsonrpc.o 00:07:43.901 CXX test/cpp_headers/likely.o 00:07:43.901 CXX test/cpp_headers/lvol.o 00:07:43.901 CXX test/cpp_headers/log.o 00:07:43.901 CXX test/cpp_headers/memory.o 00:07:43.901 CXX test/cpp_headers/md5.o 00:07:44.169 CXX test/cpp_headers/nbd.o 00:07:44.169 CXX test/cpp_headers/mmio.o 00:07:44.169 CXX test/cpp_headers/nvme.o 00:07:44.169 CXX test/cpp_headers/nvme_intel.o 00:07:44.169 CXX test/cpp_headers/net.o 00:07:44.169 CXX test/cpp_headers/notify.o 00:07:44.169 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:44.169 CXX test/cpp_headers/nvme_ocssd.o 00:07:44.169 CXX test/cpp_headers/nvme_spec.o 00:07:44.169 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:44.169 CXX test/cpp_headers/nvme_zns.o 00:07:44.169 CXX test/cpp_headers/nvmf.o 00:07:44.169 CXX test/cpp_headers/nvmf_cmd.o 00:07:44.169 CXX test/cpp_headers/nvmf_spec.o 00:07:44.169 CXX test/cpp_headers/opal.o 00:07:44.169 CXX test/cpp_headers/nvmf_transport.o 00:07:44.169 CXX test/cpp_headers/pci_ids.o 00:07:44.169 CXX test/cpp_headers/opal_spec.o 00:07:44.169 CC examples/util/zipf/zipf.o 00:07:44.169 CXX test/cpp_headers/pipe.o 00:07:44.169 CXX test/cpp_headers/queue.o 00:07:44.169 CXX test/cpp_headers/reduce.o 00:07:44.169 CXX test/cpp_headers/rpc.o 00:07:44.169 CXX test/cpp_headers/scheduler.o 00:07:44.169 CXX test/cpp_headers/scsi.o 00:07:44.169 CXX test/cpp_headers/scsi_spec.o 00:07:44.169 CXX test/cpp_headers/sock.o 00:07:44.169 CXX test/cpp_headers/stdinc.o 00:07:44.169 CXX test/cpp_headers/string.o 00:07:44.169 CXX test/cpp_headers/trace.o 00:07:44.169 CXX test/cpp_headers/thread.o 00:07:44.169 CXX test/cpp_headers/tree.o 00:07:44.169 LINK spdk_lspci 00:07:44.169 CC test/thread/poller_perf/poller_perf.o 00:07:44.169 CXX 
test/cpp_headers/trace_parser.o 00:07:44.169 CXX test/cpp_headers/ublk.o 00:07:44.169 CXX test/cpp_headers/uuid.o 00:07:44.169 CXX test/cpp_headers/vfio_user_pci.o 00:07:44.169 CC examples/ioat/verify/verify.o 00:07:44.169 CXX test/cpp_headers/util.o 00:07:44.169 CXX test/cpp_headers/version.o 00:07:44.169 CC test/app/jsoncat/jsoncat.o 00:07:44.169 CXX test/cpp_headers/vfio_user_spec.o 00:07:44.169 CC app/fio/nvme/fio_plugin.o 00:07:44.169 CXX test/cpp_headers/vhost.o 00:07:44.169 CC examples/ioat/perf/perf.o 00:07:44.169 CXX test/cpp_headers/vmd.o 00:07:44.169 CC test/env/vtophys/vtophys.o 00:07:44.169 CXX test/cpp_headers/xor.o 00:07:44.169 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:44.169 CXX test/cpp_headers/zipf.o 00:07:44.169 CC test/app/stub/stub.o 00:07:44.169 CC test/env/pci/pci_ut.o 00:07:44.169 CC test/app/histogram_perf/histogram_perf.o 00:07:44.169 CC test/dma/test_dma/test_dma.o 00:07:44.169 CC test/env/memory/memory_ut.o 00:07:44.169 CC test/app/bdev_svc/bdev_svc.o 00:07:44.169 CC app/fio/bdev/fio_plugin.o 00:07:44.169 LINK rpc_client_test 00:07:44.441 LINK interrupt_tgt 00:07:44.441 LINK spdk_nvme_discover 00:07:44.704 LINK nvmf_tgt 00:07:44.704 LINK spdk_tgt 00:07:44.704 LINK spdk_trace_record 00:07:44.704 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:44.704 LINK jsoncat 00:07:44.704 CC test/env/mem_callbacks/mem_callbacks.o 00:07:44.967 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:44.967 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:44.967 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:44.967 LINK spdk_trace 00:07:44.967 LINK iscsi_tgt 00:07:44.967 LINK spdk_dd 00:07:44.967 LINK verify 00:07:44.967 LINK bdev_svc 00:07:45.233 LINK zipf 00:07:45.233 LINK poller_perf 00:07:45.233 LINK histogram_perf 00:07:45.233 LINK vtophys 00:07:45.233 LINK env_dpdk_post_init 00:07:45.233 LINK stub 00:07:45.233 LINK ioat_perf 00:07:45.495 CC app/vhost/vhost.o 00:07:45.495 LINK spdk_bdev 00:07:45.495 LINK nvme_fuzz 00:07:45.495 LINK pci_ut 00:07:45.495 LINK spdk_nvme_perf 00:07:45.495 LINK spdk_nvme 00:07:45.495 LINK test_dma 00:07:45.757 LINK spdk_nvme_identify 00:07:45.757 LINK vhost 00:07:45.757 LINK vhost_fuzz 00:07:45.757 CC examples/vmd/lsvmd/lsvmd.o 00:07:45.757 CC test/event/reactor_perf/reactor_perf.o 00:07:45.757 CC examples/idxd/perf/perf.o 00:07:45.757 CC test/event/event_perf/event_perf.o 00:07:45.757 CC examples/sock/hello_world/hello_sock.o 00:07:45.757 CC examples/vmd/led/led.o 00:07:45.757 CC test/event/reactor/reactor.o 00:07:45.757 CC examples/thread/thread/thread_ex.o 00:07:45.757 CC test/event/app_repeat/app_repeat.o 00:07:45.757 CC test/event/scheduler/scheduler.o 00:07:45.757 LINK spdk_top 00:07:45.757 LINK mem_callbacks 00:07:46.019 LINK reactor_perf 00:07:46.019 LINK lsvmd 00:07:46.019 LINK event_perf 00:07:46.019 LINK led 00:07:46.019 LINK reactor 00:07:46.019 LINK app_repeat 00:07:46.019 LINK hello_sock 00:07:46.019 LINK thread 00:07:46.019 LINK scheduler 00:07:46.019 LINK idxd_perf 00:07:46.280 CC test/nvme/e2edp/nvme_dp.o 00:07:46.280 CC test/nvme/fused_ordering/fused_ordering.o 00:07:46.280 CC test/nvme/err_injection/err_injection.o 00:07:46.280 CC test/nvme/boot_partition/boot_partition.o 00:07:46.280 CC test/nvme/aer/aer.o 00:07:46.280 CC test/nvme/sgl/sgl.o 00:07:46.280 CC test/nvme/simple_copy/simple_copy.o 00:07:46.280 CC test/nvme/reserve/reserve.o 00:07:46.280 CC test/nvme/reset/reset.o 00:07:46.280 CC test/nvme/overhead/overhead.o 00:07:46.280 CC test/nvme/startup/startup.o 00:07:46.280 CC test/nvme/cuse/cuse.o 00:07:46.280 CC 
test/nvme/connect_stress/connect_stress.o 00:07:46.280 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:46.280 CC test/nvme/compliance/nvme_compliance.o 00:07:46.280 CC test/nvme/fdp/fdp.o 00:07:46.280 CC test/blobfs/mkfs/mkfs.o 00:07:46.280 CC test/accel/dif/dif.o 00:07:46.280 LINK memory_ut 00:07:46.542 CC test/lvol/esnap/esnap.o 00:07:46.542 LINK startup 00:07:46.542 LINK boot_partition 00:07:46.542 LINK err_injection 00:07:46.542 LINK connect_stress 00:07:46.542 LINK fused_ordering 00:07:46.542 LINK reserve 00:07:46.542 LINK doorbell_aers 00:07:46.542 LINK nvme_dp 00:07:46.542 LINK mkfs 00:07:46.542 LINK simple_copy 00:07:46.542 CC examples/nvme/hotplug/hotplug.o 00:07:46.542 CC examples/nvme/arbitration/arbitration.o 00:07:46.542 CC examples/nvme/hello_world/hello_world.o 00:07:46.542 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:46.542 LINK sgl 00:07:46.542 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:46.542 CC examples/nvme/reconnect/reconnect.o 00:07:46.542 LINK iscsi_fuzz 00:07:46.542 LINK reset 00:07:46.542 CC examples/nvme/abort/abort.o 00:07:46.542 LINK aer 00:07:46.542 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:46.542 LINK overhead 00:07:46.542 LINK nvme_compliance 00:07:46.542 LINK fdp 00:07:46.803 CC examples/accel/perf/accel_perf.o 00:07:46.803 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:46.803 CC examples/blob/hello_world/hello_blob.o 00:07:46.803 CC examples/blob/cli/blobcli.o 00:07:46.803 LINK pmr_persistence 00:07:46.803 LINK cmb_copy 00:07:46.803 LINK hotplug 00:07:46.803 LINK hello_world 00:07:46.803 LINK arbitration 00:07:46.803 LINK dif 00:07:47.064 LINK reconnect 00:07:47.064 LINK abort 00:07:47.064 LINK hello_blob 00:07:47.064 LINK nvme_manage 00:07:47.064 LINK hello_fsdev 00:07:47.064 LINK accel_perf 00:07:47.326 LINK blobcli 00:07:47.588 LINK cuse 00:07:47.588 CC test/bdev/bdevio/bdevio.o 00:07:47.850 CC examples/bdev/hello_world/hello_bdev.o 00:07:47.850 CC examples/bdev/bdevperf/bdevperf.o 00:07:47.850 LINK bdevio 00:07:48.112 LINK hello_bdev 00:07:48.684 LINK bdevperf 00:07:49.257 CC examples/nvmf/nvmf/nvmf.o 00:07:49.518 LINK nvmf 00:07:50.901 LINK esnap 00:07:51.162 00:07:51.162 real 0m56.121s 00:07:51.162 user 8m6.911s 00:07:51.162 sys 5m28.868s 00:07:51.162 14:05:56 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:51.162 14:05:56 make -- common/autotest_common.sh@10 -- $ set +x 00:07:51.162 ************************************ 00:07:51.162 END TEST make 00:07:51.162 ************************************ 00:07:51.162 14:05:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:51.162 14:05:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:51.162 14:05:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:51.162 14:05:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.163 14:05:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:51.163 14:05:56 -- pm/common@44 -- $ pid=3106115 00:07:51.163 14:05:56 -- pm/common@50 -- $ kill -TERM 3106115 00:07:51.163 14:05:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.163 14:05:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:51.163 14:05:56 -- pm/common@44 -- $ pid=3106116 00:07:51.163 14:05:56 -- pm/common@50 -- $ kill -TERM 3106116 00:07:51.163 14:05:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.163 14:05:56 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:51.163 14:05:56 -- pm/common@44 -- $ pid=3106118 00:07:51.163 14:05:56 -- pm/common@50 -- $ kill -TERM 3106118 00:07:51.163 14:05:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.163 14:05:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:51.163 14:05:56 -- pm/common@44 -- $ pid=3106143 00:07:51.163 14:05:56 -- pm/common@50 -- $ sudo -E kill -TERM 3106143 00:07:51.425 14:05:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:51.425 14:05:56 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:51.425 14:05:56 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.425 14:05:56 -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.425 14:05:56 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.425 14:05:56 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.425 14:05:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.425 14:05:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.425 14:05:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.425 14:05:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.425 14:05:56 -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.425 14:05:56 -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.425 14:05:56 -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.425 14:05:56 -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.425 14:05:56 -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.425 14:05:56 -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.425 14:05:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.425 14:05:56 -- scripts/common.sh@344 -- # case "$op" in 00:07:51.425 14:05:56 -- scripts/common.sh@345 -- # : 1 00:07:51.425 14:05:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.425 14:05:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.425 14:05:56 -- scripts/common.sh@365 -- # decimal 1 00:07:51.425 14:05:56 -- scripts/common.sh@353 -- # local d=1 00:07:51.425 14:05:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.425 14:05:56 -- scripts/common.sh@355 -- # echo 1 00:07:51.425 14:05:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.425 14:05:56 -- scripts/common.sh@366 -- # decimal 2 00:07:51.425 14:05:56 -- scripts/common.sh@353 -- # local d=2 00:07:51.425 14:05:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.425 14:05:56 -- scripts/common.sh@355 -- # echo 2 00:07:51.425 14:05:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.425 14:05:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.425 14:05:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.425 14:05:56 -- scripts/common.sh@368 -- # return 0 00:07:51.425 14:05:56 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.425 14:05:56 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.425 --rc genhtml_branch_coverage=1 00:07:51.425 --rc genhtml_function_coverage=1 00:07:51.425 --rc genhtml_legend=1 00:07:51.425 --rc geninfo_all_blocks=1 00:07:51.425 --rc geninfo_unexecuted_blocks=1 00:07:51.425 00:07:51.425 ' 00:07:51.425 14:05:56 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.425 --rc genhtml_branch_coverage=1 00:07:51.425 --rc genhtml_function_coverage=1 00:07:51.425 --rc genhtml_legend=1 00:07:51.425 --rc geninfo_all_blocks=1 00:07:51.425 --rc geninfo_unexecuted_blocks=1 00:07:51.425 00:07:51.425 ' 00:07:51.425 14:05:56 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.425 --rc genhtml_branch_coverage=1 00:07:51.425 --rc genhtml_function_coverage=1 00:07:51.425 --rc genhtml_legend=1 00:07:51.425 --rc geninfo_all_blocks=1 00:07:51.425 --rc geninfo_unexecuted_blocks=1 00:07:51.425 00:07:51.425 ' 00:07:51.425 14:05:56 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.425 --rc genhtml_branch_coverage=1 00:07:51.425 --rc genhtml_function_coverage=1 00:07:51.425 --rc genhtml_legend=1 00:07:51.425 --rc geninfo_all_blocks=1 00:07:51.425 --rc geninfo_unexecuted_blocks=1 00:07:51.425 00:07:51.425 ' 00:07:51.425 14:05:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.425 14:05:56 -- nvmf/common.sh@7 -- # uname -s 00:07:51.425 14:05:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.425 14:05:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.425 14:05:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.425 14:05:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.425 14:05:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.425 14:05:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.425 14:05:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.425 14:05:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.425 14:05:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.425 14:05:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.425 14:05:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.425 14:05:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:51.425 14:05:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.425 14:05:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.425 14:05:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.425 14:05:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.425 14:05:56 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.425 14:05:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.425 14:05:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.425 14:05:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.425 14:05:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.425 14:05:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.425 14:05:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.425 14:05:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.425 14:05:56 -- paths/export.sh@5 -- # export PATH 00:07:51.425 14:05:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.425 14:05:56 -- nvmf/common.sh@51 -- # : 0 00:07:51.425 14:05:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.425 14:05:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.425 14:05:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.425 14:05:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.425 14:05:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.425 14:05:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.426 14:05:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.426 14:05:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.426 14:05:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.426 14:05:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:51.426 14:05:56 -- spdk/autotest.sh@32 -- # uname -s 00:07:51.426 14:05:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:51.426 14:05:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:51.426 14:05:56 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
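
Note on the "[: : integer expression expected" message from nvmf/common.sh a few lines above: bash's test builtin received an empty string where -eq expects a number; the traced command is literally '[' '' -eq 1 ']'. A minimal sketch of the usual guard, with a placeholder variable name (SPDK_TEST_EXAMPLE is illustrative, not the actual flag checked at that line):

    # Default an unset/empty flag to 0 before a numeric test; "[ '' -eq 1 ]"
    # is exactly what trips "integer expression expected" in the log above.
    flag="${SPDK_TEST_EXAMPLE:-0}"
    if [ "$flag" -eq 1 ]; then
        echo "flag enabled"
    fi

The failed test evaluates as false rather than aborting the script, which is why the run simply continues past the message.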
00:07:51.426 14:05:56 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:51.426 14:05:56 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:51.426 14:05:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:51.686 14:05:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:51.687 14:05:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:51.687 14:05:56 -- spdk/autotest.sh@48 -- # udevadm_pid=3171666 00:07:51.687 14:05:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:51.687 14:05:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:51.687 14:05:56 -- pm/common@17 -- # local monitor 00:07:51.687 14:05:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.687 14:05:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.687 14:05:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.687 14:05:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:51.687 14:05:56 -- pm/common@21 -- # date +%s 00:07:51.687 14:05:56 -- pm/common@25 -- # sleep 1 00:07:51.687 14:05:56 -- pm/common@21 -- # date +%s 00:07:51.687 14:05:56 -- pm/common@21 -- # date +%s 00:07:51.687 14:05:56 -- pm/common@21 -- # date +%s 00:07:51.687 14:05:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732539956 00:07:51.687 14:05:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732539956 00:07:51.687 14:05:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732539956 00:07:51.687 14:05:56 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732539956 00:07:51.687 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732539956_collect-cpu-load.pm.log 00:07:51.687 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732539956_collect-vmstat.pm.log 00:07:51.687 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732539956_collect-cpu-temp.pm.log 00:07:51.687 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732539956_collect-bmc-pm.bmc.pm.log 00:07:52.630 14:05:57 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:52.630 14:05:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:52.630 14:05:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.630 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:07:52.630 14:05:57 -- spdk/autotest.sh@59 -- # create_test_list 00:07:52.630 14:05:57 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:52.630 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:07:52.630 14:05:57 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:52.630 14:05:57 
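
Before any tests run, autotest.sh redirects kernel core dumps to its own collector: it saves the current core_pattern, creates the coredumps output directory, and installs a pipe handler (the leading "|" makes the kernel exec the script, with %P, %s and %t expanding to pid, signal number and dump time). A sketch of that sequence under the paths seen in this run; the restore-on-exit handling in the real script is elided here:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from the log
    output_dir=$rootdir/../output
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)       # kept so it can be restored later
    mkdir -p "$output_dir/coredumps"
    # Requires root: the kernel will pipe each core dump to the collector script.
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern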
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:52.630 14:05:57 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:52.630 14:05:57 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:52.630 14:05:57 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:52.630 14:05:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:52.630 14:05:57 -- common/autotest_common.sh@1457 -- # uname 00:07:52.630 14:05:57 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:52.630 14:05:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:52.630 14:05:57 -- common/autotest_common.sh@1477 -- # uname 00:07:52.630 14:05:57 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:52.630 14:05:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:52.630 14:05:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:52.630 lcov: LCOV version 1.15 00:07:52.630 14:05:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:08:07.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:07.615 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:25.774 14:06:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:25.774 14:06:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:25.774 14:06:28 -- common/autotest_common.sh@10 -- # set +x 00:08:25.774 14:06:28 -- spdk/autotest.sh@78 -- # rm -f 00:08:25.774 14:06:28 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:26.715 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:08:26.715 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:08:26.715 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:08:26.715 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:65:00.0 (144d a80a): Already using the nvme driver 00:08:26.976 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:08:26.976 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:08:27.237 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:08:27.237 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:08:27.237 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:08:27.499 14:06:32 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:08:27.499 14:06:32 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:27.499 14:06:32 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:27.499 14:06:32 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:08:27.499 14:06:32 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:08:27.499 14:06:32 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:08:27.499 14:06:32 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:27.499 14:06:32 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:27.499 14:06:32 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:27.499 14:06:32 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:27.499 14:06:32 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:27.499 14:06:32 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:27.499 14:06:32 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:27.499 14:06:32 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:27.499 14:06:32 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:27.499 No valid GPT data, bailing 00:08:27.499 14:06:32 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:27.499 14:06:32 -- scripts/common.sh@394 -- # pt= 00:08:27.499 14:06:32 -- scripts/common.sh@395 -- # return 1 00:08:27.499 14:06:32 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:27.499 1+0 records in 00:08:27.499 1+0 records out 00:08:27.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00523845 s, 200 MB/s 00:08:27.499 14:06:32 -- spdk/autotest.sh@105 -- # sync 00:08:27.499 14:06:32 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:27.499 14:06:32 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:27.499 14:06:32 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:37.502 14:06:41 -- spdk/autotest.sh@111 -- # uname -s 00:08:37.502 14:06:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:37.502 14:06:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:37.502 14:06:41 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:40.053 Hugepages 00:08:40.053 node hugesize free / total 00:08:40.053 node0 1048576kB 0 / 0 00:08:40.053 node0 2048kB 0 / 0 00:08:40.053 node1 1048576kB 0 / 0 00:08:40.053 node1 2048kB 0 / 0 00:08:40.053 00:08:40.053 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:40.053 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:08:40.053 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:08:40.053 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:08:40.053 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:08:40.053 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:08:40.053 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:08:40.053 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:08:40.053 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:08:40.053 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:08:40.053 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:08:40.053 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:08:40.053 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:08:40.053 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:08:40.053 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:08:40.053 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:08:40.053 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:08:40.053 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:08:40.053 14:06:44 -- spdk/autotest.sh@117 -- # uname -s 00:08:40.053 14:06:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:40.053 14:06:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:40.053 14:06:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:43.356 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:43.356 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:45.280 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:45.540 14:06:50 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:46.482 14:06:51 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:46.482 14:06:51 -- common/autotest_common.sh@1518 -- # local bdfs 00:08:46.482 14:06:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:46.482 14:06:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:46.482 14:06:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:46.482 14:06:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:46.482 14:06:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:46.482 14:06:51 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:46.482 14:06:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:46.744 14:06:51 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:46.744 14:06:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:46.744 14:06:51 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:50.047 Waiting for block devices as requested 00:08:50.047 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:50.309 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:50.309 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:50.309 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:50.570 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:50.571 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:50.571 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:50.571 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:50.832 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:08:51.093 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:51.093 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:51.093 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:51.353 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:51.353 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:51.353 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:51.353 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:51.613 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:08:51.875 14:06:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:51.875 14:06:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:08:51.875 14:06:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:08:51.875 14:06:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:51.875 14:06:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:51.875 14:06:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:51.875 14:06:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:08:51.875 14:06:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:51.875 14:06:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:51.875 14:06:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:51.875 14:06:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:51.875 14:06:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:51.875 14:06:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:51.875 14:06:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:51.875 14:06:56 -- common/autotest_common.sh@1543 -- # continue 00:08:51.875 14:06:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:08:51.875 14:06:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.875 14:06:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.875 14:06:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:51.875 14:06:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.875 14:06:56 -- common/autotest_common.sh@10 -- # set +x 00:08:51.875 14:06:56 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:56.086 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:56.086 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:56.086 14:07:00 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:08:56.086 14:07:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:56.086 14:07:00 -- common/autotest_common.sh@10 -- # set +x 00:08:56.086 14:07:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:56.086 14:07:00 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:56.086 14:07:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:56.086 14:07:00 -- common/autotest_common.sh@1563 -- # bdfs=() 00:08:56.086 14:07:00 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:56.086 14:07:00 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:56.086 14:07:00 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:56.086 14:07:01 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:56.086 14:07:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:56.086 14:07:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:56.086 14:07:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:56.086 14:07:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:56.087 14:07:01 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:56.087 14:07:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:56.087 14:07:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:56.087 14:07:01 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:56.087 14:07:01 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:08:56.087 14:07:01 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:08:56.087 14:07:01 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:08:56.087 14:07:01 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:56.087 14:07:01 -- common/autotest_common.sh@1572 -- # return 0 00:08:56.087 14:07:01 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:56.087 14:07:01 -- common/autotest_common.sh@1580 -- # return 0 00:08:56.087 14:07:01 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:56.087 14:07:01 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:56.087 14:07:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:56.087 14:07:01 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:56.087 14:07:01 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:56.087 14:07:01 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.087 14:07:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.087 14:07:01 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:56.087 14:07:01 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:56.087 14:07:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.087 14:07:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.087 14:07:01 -- common/autotest_common.sh@10 -- # set +x 00:08:56.087 ************************************ 00:08:56.087 START TEST env 00:08:56.087 ************************************ 00:08:56.087 14:07:01 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:56.349 * Looking for test storage... 
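
The opal_revert_cleanup traced above only acts on controllers whose PCI device id matches 0x0a54 (an Intel datacenter NVMe part id), filtered via sysfs. On this node the only NVMe controller is a Samsung device reporting 0xa80a, so the match fails and the cleanup is skipped. A sketch of the filter as traced, using the same gen_nvme.sh + jq enumeration seen in the log:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    wanted=0x0a54                       # device id the opal revert applies to
    _bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    bdfs=()
    for bdf in "${_bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$wanted" ]] && bdfs+=("$bdf")   # literal string match, as in the trace
    done
    # Here bdfs stays empty: 0000:65:00.0 reports 0xa80a, not 0x0a54.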
00:08:56.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.349 14:07:01 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.349 14:07:01 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.349 14:07:01 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.349 14:07:01 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.349 14:07:01 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.349 14:07:01 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.349 14:07:01 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.349 14:07:01 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.349 14:07:01 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.349 14:07:01 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.349 14:07:01 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.349 14:07:01 env -- scripts/common.sh@344 -- # case "$op" in 00:08:56.349 14:07:01 env -- scripts/common.sh@345 -- # : 1 00:08:56.349 14:07:01 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.349 14:07:01 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:56.349 14:07:01 env -- scripts/common.sh@365 -- # decimal 1 00:08:56.349 14:07:01 env -- scripts/common.sh@353 -- # local d=1 00:08:56.349 14:07:01 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.349 14:07:01 env -- scripts/common.sh@355 -- # echo 1 00:08:56.349 14:07:01 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.349 14:07:01 env -- scripts/common.sh@366 -- # decimal 2 00:08:56.349 14:07:01 env -- scripts/common.sh@353 -- # local d=2 00:08:56.349 14:07:01 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.349 14:07:01 env -- scripts/common.sh@355 -- # echo 2 00:08:56.349 14:07:01 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.349 14:07:01 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.349 14:07:01 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.349 14:07:01 env -- scripts/common.sh@368 -- # return 0 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.349 --rc genhtml_branch_coverage=1 00:08:56.349 --rc genhtml_function_coverage=1 00:08:56.349 --rc genhtml_legend=1 00:08:56.349 --rc geninfo_all_blocks=1 00:08:56.349 --rc geninfo_unexecuted_blocks=1 00:08:56.349 00:08:56.349 ' 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.349 --rc genhtml_branch_coverage=1 00:08:56.349 --rc genhtml_function_coverage=1 00:08:56.349 --rc genhtml_legend=1 00:08:56.349 --rc geninfo_all_blocks=1 00:08:56.349 --rc geninfo_unexecuted_blocks=1 00:08:56.349 00:08:56.349 ' 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.349 --rc genhtml_branch_coverage=1 00:08:56.349 --rc genhtml_function_coverage=1 
00:08:56.349 --rc genhtml_legend=1 00:08:56.349 --rc geninfo_all_blocks=1 00:08:56.349 --rc geninfo_unexecuted_blocks=1 00:08:56.349 00:08:56.349 ' 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.349 --rc genhtml_branch_coverage=1 00:08:56.349 --rc genhtml_function_coverage=1 00:08:56.349 --rc genhtml_legend=1 00:08:56.349 --rc geninfo_all_blocks=1 00:08:56.349 --rc geninfo_unexecuted_blocks=1 00:08:56.349 00:08:56.349 ' 00:08:56.349 14:07:01 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.349 14:07:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.349 14:07:01 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.349 ************************************ 00:08:56.349 START TEST env_memory 00:08:56.349 ************************************ 00:08:56.349 14:07:01 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:56.349 00:08:56.349 00:08:56.349 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.349 http://cunit.sourceforge.net/ 00:08:56.349 00:08:56.349 00:08:56.349 Suite: memory 00:08:56.611 Test: alloc and free memory map ...[2024-11-25 14:07:01.452450] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:56.612 passed 00:08:56.612 Test: mem map translation ...[2024-11-25 14:07:01.478111] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:56.612 [2024-11-25 14:07:01.478164] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:56.612 [2024-11-25 14:07:01.478212] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:56.612 [2024-11-25 14:07:01.478220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:56.612 passed 00:08:56.612 Test: mem map registration ...[2024-11-25 14:07:01.533409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:56.612 [2024-11-25 14:07:01.533433] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:56.612 passed 00:08:56.612 Test: mem map adjacent registrations ...passed 00:08:56.612 00:08:56.612 Run Summary: Type Total Ran Passed Failed Inactive 00:08:56.612 suites 1 1 n/a 0 0 00:08:56.612 tests 4 4 4 0 0 00:08:56.612 asserts 152 152 152 0 n/a 00:08:56.612 00:08:56.612 Elapsed time = 0.192 seconds 00:08:56.612 00:08:56.612 real 0m0.207s 00:08:56.612 user 0m0.193s 00:08:56.612 sys 0m0.013s 00:08:56.612 14:07:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.612 14:07:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
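
The *ERROR* lines inside env_memory above are the point of the test, not failures: memory_ut deliberately feeds spdk_mem_map_set_translation and spdk_mem_register out-of-range or unaligned arguments (vaddr=1234, len=1234, a 2^48 address) and asserts they are rejected, which is why the suite still reports 4/4 passed. Assuming the 2 MiB map granularity those checks enforce, the alignment rule can be illustrated with a small helper (illustrative, not part of the test binary):

    # A vaddr/len pair must be a multiple of the 2 MiB hugepage unit to be
    # accepted; len=1234 and vaddr=1234 from the log both fail this check.
    is_2mb_aligned() {
        local val=$1
        (( val % (2 * 1024 * 1024) == 0 ))
    }
    is_2mb_aligned 2097152 && echo "0x200000: accepted"
    is_2mb_aligned 1234    || echo "1234: rejected (matches the *ERROR* lines)"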
00:08:56.612 ************************************ 00:08:56.612 END TEST env_memory 00:08:56.612 ************************************ 00:08:56.612 14:07:01 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:56.612 14:07:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:56.612 14:07:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.612 14:07:01 env -- common/autotest_common.sh@10 -- # set +x 00:08:56.612 ************************************ 00:08:56.612 START TEST env_vtophys 00:08:56.612 ************************************ 00:08:56.612 14:07:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:56.874 EAL: lib.eal log level changed from notice to debug 00:08:56.874 EAL: Detected lcore 0 as core 0 on socket 0 00:08:56.874 EAL: Detected lcore 1 as core 1 on socket 0 00:08:56.874 EAL: Detected lcore 2 as core 2 on socket 0 00:08:56.874 EAL: Detected lcore 3 as core 3 on socket 0 00:08:56.874 EAL: Detected lcore 4 as core 4 on socket 0 00:08:56.874 EAL: Detected lcore 5 as core 5 on socket 0 00:08:56.874 EAL: Detected lcore 6 as core 6 on socket 0 00:08:56.874 EAL: Detected lcore 7 as core 7 on socket 0 00:08:56.874 EAL: Detected lcore 8 as core 8 on socket 0 00:08:56.874 EAL: Detected lcore 9 as core 9 on socket 0 00:08:56.874 EAL: Detected lcore 10 as core 10 on socket 0 00:08:56.874 EAL: Detected lcore 11 as core 11 on socket 0 00:08:56.874 EAL: Detected lcore 12 as core 12 on socket 0 00:08:56.874 EAL: Detected lcore 13 as core 13 on socket 0 00:08:56.874 EAL: Detected lcore 14 as core 14 on socket 0 00:08:56.874 EAL: Detected lcore 15 as core 15 on socket 0 00:08:56.874 EAL: Detected lcore 16 as core 16 on socket 0 00:08:56.874 EAL: Detected lcore 17 as core 17 on socket 0 00:08:56.874 EAL: Detected lcore 18 as core 18 on socket 0 00:08:56.874 EAL: Detected lcore 19 as core 19 on socket 0 00:08:56.874 EAL: Detected lcore 20 as core 20 on socket 0 00:08:56.874 EAL: Detected lcore 21 as core 21 on socket 0 00:08:56.874 EAL: Detected lcore 22 as core 22 on socket 0 00:08:56.874 EAL: Detected lcore 23 as core 23 on socket 0 00:08:56.874 EAL: Detected lcore 24 as core 24 on socket 0 00:08:56.874 EAL: Detected lcore 25 as core 25 on socket 0 00:08:56.874 EAL: Detected lcore 26 as core 26 on socket 0 00:08:56.874 EAL: Detected lcore 27 as core 27 on socket 0 00:08:56.874 EAL: Detected lcore 28 as core 28 on socket 0 00:08:56.874 EAL: Detected lcore 29 as core 29 on socket 0 00:08:56.874 EAL: Detected lcore 30 as core 30 on socket 0 00:08:56.874 EAL: Detected lcore 31 as core 31 on socket 0 00:08:56.874 EAL: Detected lcore 32 as core 32 on socket 0 00:08:56.874 EAL: Detected lcore 33 as core 33 on socket 0 00:08:56.874 EAL: Detected lcore 34 as core 34 on socket 0 00:08:56.874 EAL: Detected lcore 35 as core 35 on socket 0 00:08:56.874 EAL: Detected lcore 36 as core 0 on socket 1 00:08:56.874 EAL: Detected lcore 37 as core 1 on socket 1 00:08:56.874 EAL: Detected lcore 38 as core 2 on socket 1 00:08:56.874 EAL: Detected lcore 39 as core 3 on socket 1 00:08:56.874 EAL: Detected lcore 40 as core 4 on socket 1 00:08:56.874 EAL: Detected lcore 41 as core 5 on socket 1 00:08:56.874 EAL: Detected lcore 42 as core 6 on socket 1 00:08:56.874 EAL: Detected lcore 43 as core 7 on socket 1 00:08:56.874 EAL: Detected lcore 44 as core 8 on socket 1 00:08:56.874 EAL: Detected lcore 45 as core 9 on socket 1 
00:08:56.874 EAL: Detected lcore 46 as core 10 on socket 1 00:08:56.874 EAL: Detected lcore 47 as core 11 on socket 1 00:08:56.874 EAL: Detected lcore 48 as core 12 on socket 1 00:08:56.874 EAL: Detected lcore 49 as core 13 on socket 1 00:08:56.874 EAL: Detected lcore 50 as core 14 on socket 1 00:08:56.874 EAL: Detected lcore 51 as core 15 on socket 1 00:08:56.874 EAL: Detected lcore 52 as core 16 on socket 1 00:08:56.874 EAL: Detected lcore 53 as core 17 on socket 1 00:08:56.874 EAL: Detected lcore 54 as core 18 on socket 1 00:08:56.874 EAL: Detected lcore 55 as core 19 on socket 1 00:08:56.874 EAL: Detected lcore 56 as core 20 on socket 1 00:08:56.874 EAL: Detected lcore 57 as core 21 on socket 1 00:08:56.874 EAL: Detected lcore 58 as core 22 on socket 1 00:08:56.874 EAL: Detected lcore 59 as core 23 on socket 1 00:08:56.874 EAL: Detected lcore 60 as core 24 on socket 1 00:08:56.874 EAL: Detected lcore 61 as core 25 on socket 1 00:08:56.874 EAL: Detected lcore 62 as core 26 on socket 1 00:08:56.874 EAL: Detected lcore 63 as core 27 on socket 1 00:08:56.874 EAL: Detected lcore 64 as core 28 on socket 1 00:08:56.874 EAL: Detected lcore 65 as core 29 on socket 1 00:08:56.874 EAL: Detected lcore 66 as core 30 on socket 1 00:08:56.874 EAL: Detected lcore 67 as core 31 on socket 1 00:08:56.874 EAL: Detected lcore 68 as core 32 on socket 1 00:08:56.874 EAL: Detected lcore 69 as core 33 on socket 1 00:08:56.874 EAL: Detected lcore 70 as core 34 on socket 1 00:08:56.874 EAL: Detected lcore 71 as core 35 on socket 1 00:08:56.874 EAL: Detected lcore 72 as core 0 on socket 0 00:08:56.874 EAL: Detected lcore 73 as core 1 on socket 0 00:08:56.874 EAL: Detected lcore 74 as core 2 on socket 0 00:08:56.874 EAL: Detected lcore 75 as core 3 on socket 0 00:08:56.874 EAL: Detected lcore 76 as core 4 on socket 0 00:08:56.874 EAL: Detected lcore 77 as core 5 on socket 0 00:08:56.874 EAL: Detected lcore 78 as core 6 on socket 0 00:08:56.874 EAL: Detected lcore 79 as core 7 on socket 0 00:08:56.874 EAL: Detected lcore 80 as core 8 on socket 0 00:08:56.874 EAL: Detected lcore 81 as core 9 on socket 0 00:08:56.874 EAL: Detected lcore 82 as core 10 on socket 0 00:08:56.874 EAL: Detected lcore 83 as core 11 on socket 0 00:08:56.874 EAL: Detected lcore 84 as core 12 on socket 0 00:08:56.874 EAL: Detected lcore 85 as core 13 on socket 0 00:08:56.874 EAL: Detected lcore 86 as core 14 on socket 0 00:08:56.874 EAL: Detected lcore 87 as core 15 on socket 0 00:08:56.874 EAL: Detected lcore 88 as core 16 on socket 0 00:08:56.874 EAL: Detected lcore 89 as core 17 on socket 0 00:08:56.874 EAL: Detected lcore 90 as core 18 on socket 0 00:08:56.874 EAL: Detected lcore 91 as core 19 on socket 0 00:08:56.874 EAL: Detected lcore 92 as core 20 on socket 0 00:08:56.874 EAL: Detected lcore 93 as core 21 on socket 0 00:08:56.874 EAL: Detected lcore 94 as core 22 on socket 0 00:08:56.874 EAL: Detected lcore 95 as core 23 on socket 0 00:08:56.874 EAL: Detected lcore 96 as core 24 on socket 0 00:08:56.874 EAL: Detected lcore 97 as core 25 on socket 0 00:08:56.874 EAL: Detected lcore 98 as core 26 on socket 0 00:08:56.874 EAL: Detected lcore 99 as core 27 on socket 0 00:08:56.874 EAL: Detected lcore 100 as core 28 on socket 0 00:08:56.874 EAL: Detected lcore 101 as core 29 on socket 0 00:08:56.874 EAL: Detected lcore 102 as core 30 on socket 0 00:08:56.874 EAL: Detected lcore 103 as core 31 on socket 0 00:08:56.874 EAL: Detected lcore 104 as core 32 on socket 0 00:08:56.875 EAL: Detected lcore 105 as core 33 on socket 0 00:08:56.875 EAL: 
Detected lcore 106 as core 34 on socket 0 00:08:56.875 EAL: Detected lcore 107 as core 35 on socket 0 00:08:56.875 EAL: Detected lcore 108 as core 0 on socket 1 00:08:56.875 EAL: Detected lcore 109 as core 1 on socket 1 00:08:56.875 EAL: Detected lcore 110 as core 2 on socket 1 00:08:56.875 EAL: Detected lcore 111 as core 3 on socket 1 00:08:56.875 EAL: Detected lcore 112 as core 4 on socket 1 00:08:56.875 EAL: Detected lcore 113 as core 5 on socket 1 00:08:56.875 EAL: Detected lcore 114 as core 6 on socket 1 00:08:56.875 EAL: Detected lcore 115 as core 7 on socket 1 00:08:56.875 EAL: Detected lcore 116 as core 8 on socket 1 00:08:56.875 EAL: Detected lcore 117 as core 9 on socket 1 00:08:56.875 EAL: Detected lcore 118 as core 10 on socket 1 00:08:56.875 EAL: Detected lcore 119 as core 11 on socket 1 00:08:56.875 EAL: Detected lcore 120 as core 12 on socket 1 00:08:56.875 EAL: Detected lcore 121 as core 13 on socket 1 00:08:56.875 EAL: Detected lcore 122 as core 14 on socket 1 00:08:56.875 EAL: Detected lcore 123 as core 15 on socket 1 00:08:56.875 EAL: Detected lcore 124 as core 16 on socket 1 00:08:56.875 EAL: Detected lcore 125 as core 17 on socket 1 00:08:56.875 EAL: Detected lcore 126 as core 18 on socket 1 00:08:56.875 EAL: Detected lcore 127 as core 19 on socket 1 00:08:56.875 EAL: Skipped lcore 128 as core 20 on socket 1 00:08:56.875 EAL: Skipped lcore 129 as core 21 on socket 1 00:08:56.875 EAL: Skipped lcore 130 as core 22 on socket 1 00:08:56.875 EAL: Skipped lcore 131 as core 23 on socket 1 00:08:56.875 EAL: Skipped lcore 132 as core 24 on socket 1 00:08:56.875 EAL: Skipped lcore 133 as core 25 on socket 1 00:08:56.875 EAL: Skipped lcore 134 as core 26 on socket 1 00:08:56.875 EAL: Skipped lcore 135 as core 27 on socket 1 00:08:56.875 EAL: Skipped lcore 136 as core 28 on socket 1 00:08:56.875 EAL: Skipped lcore 137 as core 29 on socket 1 00:08:56.875 EAL: Skipped lcore 138 as core 30 on socket 1 00:08:56.875 EAL: Skipped lcore 139 as core 31 on socket 1 00:08:56.875 EAL: Skipped lcore 140 as core 32 on socket 1 00:08:56.875 EAL: Skipped lcore 141 as core 33 on socket 1 00:08:56.875 EAL: Skipped lcore 142 as core 34 on socket 1 00:08:56.875 EAL: Skipped lcore 143 as core 35 on socket 1 00:08:56.875 EAL: Maximum logical cores by configuration: 128 00:08:56.875 EAL: Detected CPU lcores: 128 00:08:56.875 EAL: Detected NUMA nodes: 2 00:08:56.875 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:56.875 EAL: Detected shared linkage of DPDK 00:08:56.875 EAL: No shared files mode enabled, IPC will be disabled 00:08:56.875 EAL: Bus pci wants IOVA as 'DC' 00:08:56.875 EAL: Buses did not request a specific IOVA mode. 00:08:56.875 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:56.875 EAL: Selected IOVA mode 'VA' 00:08:56.875 EAL: Probing VFIO support... 00:08:56.875 EAL: IOMMU type 1 (Type 1) is supported 00:08:56.875 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:56.875 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:56.875 EAL: VFIO support initialized 00:08:56.875 EAL: Ask a virtual area of 0x2e000 bytes 00:08:56.875 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:56.875 EAL: Setting up physically contiguous memory... 
00:08:56.875 EAL: Setting maximum number of open files to 524288 00:08:56.875 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:56.875 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:56.875 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:56.875 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:56.875 EAL: Ask a virtual area of 0x61000 bytes 00:08:56.875 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:56.875 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:56.875 EAL: Ask a virtual area of 0x400000000 bytes 00:08:56.875 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:08:56.875 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:56.875 EAL: Hugepages will be freed exactly as allocated. 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: TSC frequency is ~2400000 KHz 00:08:56.875 EAL: Main lcore 0 is ready (tid=7f18a82bfa00;cpuset=[0]) 00:08:56.875 EAL: Trying to obtain current memory policy. 00:08:56.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.875 EAL: Restoring previous memory policy: 0 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: Heap on socket 0 was expanded by 2MB 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:56.875 EAL: Mem event callback 'spdk:(nil)' registered 00:08:56.875 00:08:56.875 00:08:56.875 CUnit - A unit testing framework for C - Version 2.1-3 00:08:56.875 http://cunit.sourceforge.net/ 00:08:56.875 00:08:56.875 00:08:56.875 Suite: components_suite 00:08:56.875 Test: vtophys_malloc_test ...passed 00:08:56.875 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:56.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.875 EAL: Restoring previous memory policy: 4 00:08:56.875 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: Heap on socket 0 was expanded by 4MB 00:08:56.875 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: Heap on socket 0 was shrunk by 4MB 00:08:56.875 EAL: Trying to obtain current memory policy. 00:08:56.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.875 EAL: Restoring previous memory policy: 4 00:08:56.875 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: Heap on socket 0 was expanded by 6MB 00:08:56.875 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: Heap on socket 0 was shrunk by 6MB 00:08:56.875 EAL: Trying to obtain current memory policy. 00:08:56.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.875 EAL: Restoring previous memory policy: 4 00:08:56.875 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: Heap on socket 0 was expanded by 10MB 00:08:56.875 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.875 EAL: Heap on socket 0 was shrunk by 10MB 00:08:56.875 EAL: Trying to obtain current memory policy. 
00:08:56.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.875 EAL: Restoring previous memory policy: 4 00:08:56.875 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.875 EAL: request: mp_malloc_sync 00:08:56.875 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was expanded by 18MB 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was shrunk by 18MB 00:08:56.876 EAL: Trying to obtain current memory policy. 00:08:56.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.876 EAL: Restoring previous memory policy: 4 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was expanded by 34MB 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was shrunk by 34MB 00:08:56.876 EAL: Trying to obtain current memory policy. 00:08:56.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.876 EAL: Restoring previous memory policy: 4 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was expanded by 66MB 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was shrunk by 66MB 00:08:56.876 EAL: Trying to obtain current memory policy. 00:08:56.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.876 EAL: Restoring previous memory policy: 4 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was expanded by 130MB 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was shrunk by 130MB 00:08:56.876 EAL: Trying to obtain current memory policy. 00:08:56.876 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:56.876 EAL: Restoring previous memory policy: 4 00:08:56.876 EAL: Calling mem event callback 'spdk:(nil)' 00:08:56.876 EAL: request: mp_malloc_sync 00:08:56.876 EAL: No shared files mode enabled, IPC is disabled 00:08:56.876 EAL: Heap on socket 0 was expanded by 258MB 00:08:57.136 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.136 EAL: request: mp_malloc_sync 00:08:57.136 EAL: No shared files mode enabled, IPC is disabled 00:08:57.136 EAL: Heap on socket 0 was shrunk by 258MB 00:08:57.136 EAL: Trying to obtain current memory policy. 
00:08:57.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.136 EAL: Restoring previous memory policy: 4 00:08:57.136 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.136 EAL: request: mp_malloc_sync 00:08:57.136 EAL: No shared files mode enabled, IPC is disabled 00:08:57.136 EAL: Heap on socket 0 was expanded by 514MB 00:08:57.136 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.136 EAL: request: mp_malloc_sync 00:08:57.136 EAL: No shared files mode enabled, IPC is disabled 00:08:57.136 EAL: Heap on socket 0 was shrunk by 514MB 00:08:57.136 EAL: Trying to obtain current memory policy. 00:08:57.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:57.396 EAL: Restoring previous memory policy: 4 00:08:57.396 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.396 EAL: request: mp_malloc_sync 00:08:57.396 EAL: No shared files mode enabled, IPC is disabled 00:08:57.396 EAL: Heap on socket 0 was expanded by 1026MB 00:08:57.396 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.658 EAL: request: mp_malloc_sync 00:08:57.658 EAL: No shared files mode enabled, IPC is disabled 00:08:57.658 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:57.658 passed 00:08:57.658 00:08:57.658 Run Summary: Type Total Ran Passed Failed Inactive 00:08:57.658 suites 1 1 n/a 0 0 00:08:57.658 tests 2 2 2 0 0 00:08:57.658 asserts 497 497 497 0 n/a 00:08:57.658 00:08:57.658 Elapsed time = 0.688 seconds 00:08:57.658 EAL: Calling mem event callback 'spdk:(nil)' 00:08:57.658 EAL: request: mp_malloc_sync 00:08:57.658 EAL: No shared files mode enabled, IPC is disabled 00:08:57.658 EAL: Heap on socket 0 was shrunk by 2MB 00:08:57.658 EAL: No shared files mode enabled, IPC is disabled 00:08:57.658 EAL: No shared files mode enabled, IPC is disabled 00:08:57.658 EAL: No shared files mode enabled, IPC is disabled 00:08:57.658 00:08:57.658 real 0m0.838s 00:08:57.658 user 0m0.450s 00:08:57.658 sys 0m0.362s 00:08:57.658 14:07:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.658 14:07:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:57.658 ************************************ 00:08:57.658 END TEST env_vtophys 00:08:57.658 ************************************ 00:08:57.658 14:07:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:57.658 14:07:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.658 14:07:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.658 14:07:02 env -- common/autotest_common.sh@10 -- # set +x 00:08:57.658 ************************************ 00:08:57.658 START TEST env_pci 00:08:57.658 ************************************ 00:08:57.658 14:07:02 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:57.658 00:08:57.658 00:08:57.658 CUnit - A unit testing framework for C - Version 2.1-3 00:08:57.658 http://cunit.sourceforge.net/ 00:08:57.658 00:08:57.658 00:08:57.658 Suite: pci 00:08:57.658 Test: pci_hook ...[2024-11-25 14:07:02.623617] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3191616 has claimed it 00:08:57.658 EAL: Cannot find device (10000:00:01.0) 00:08:57.658 EAL: Failed to attach device on primary process 00:08:57.658 passed 00:08:57.658 00:08:57.658 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:57.658 suites 1 1 n/a 0 0 00:08:57.658 tests 1 1 1 0 0 00:08:57.658 asserts 25 25 25 0 n/a 00:08:57.658 00:08:57.658 Elapsed time = 0.030 seconds 00:08:57.658 00:08:57.658 real 0m0.052s 00:08:57.658 user 0m0.018s 00:08:57.658 sys 0m0.034s 00:08:57.658 14:07:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.658 14:07:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:57.658 ************************************ 00:08:57.658 END TEST env_pci 00:08:57.658 ************************************ 00:08:57.658 14:07:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:57.658 14:07:02 env -- env/env.sh@15 -- # uname 00:08:57.658 14:07:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:57.658 14:07:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:57.658 14:07:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:57.658 14:07:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:57.658 14:07:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.658 14:07:02 env -- common/autotest_common.sh@10 -- # set +x 00:08:57.920 ************************************ 00:08:57.920 START TEST env_dpdk_post_init 00:08:57.920 ************************************ 00:08:57.920 14:07:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:57.920 EAL: Detected CPU lcores: 128 00:08:57.920 EAL: Detected NUMA nodes: 2 00:08:57.920 EAL: Detected shared linkage of DPDK 00:08:57.920 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:57.920 EAL: Selected IOVA mode 'VA' 00:08:57.920 EAL: VFIO support initialized 00:08:57.920 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:57.920 EAL: Using IOMMU type 1 (Type 1) 00:08:58.181 EAL: Ignore mapping IO port bar(1) 00:08:58.181 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:08:58.181 EAL: Ignore mapping IO port bar(1) 00:08:58.442 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:08:58.442 EAL: Ignore mapping IO port bar(1) 00:08:58.702 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:08:58.702 EAL: Ignore mapping IO port bar(1) 00:08:58.962 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:08:58.962 EAL: Ignore mapping IO port bar(1) 00:08:58.962 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:08:59.223 EAL: Ignore mapping IO port bar(1) 00:08:59.223 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:08:59.483 EAL: Ignore mapping IO port bar(1) 00:08:59.484 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:08:59.744 EAL: Ignore mapping IO port bar(1) 00:08:59.744 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:09:00.005 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:09:00.005 EAL: Ignore mapping IO port bar(1) 00:09:00.266 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:09:00.266 EAL: Ignore mapping IO port bar(1) 00:09:00.527 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:09:00.527 EAL: Ignore mapping IO port bar(1) 00:09:00.527 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:09:00.788 EAL: Ignore mapping IO port bar(1) 00:09:00.788 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:09:01.048 EAL: Ignore mapping IO port bar(1) 00:09:01.048 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:09:01.310 EAL: Ignore mapping IO port bar(1) 00:09:01.310 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:09:01.310 EAL: Ignore mapping IO port bar(1) 00:09:01.570 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:09:01.571 EAL: Ignore mapping IO port bar(1) 00:09:01.831 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:09:01.831 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:09:01.831 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:09:01.831 Starting DPDK initialization... 00:09:01.831 Starting SPDK post initialization... 00:09:01.831 SPDK NVMe probe 00:09:01.831 Attaching to 0000:65:00.0 00:09:01.831 Attached to 0000:65:00.0 00:09:01.831 Cleaning up... 00:09:03.749 00:09:03.749 real 0m5.750s 00:09:03.749 user 0m0.114s 00:09:03.749 sys 0m0.188s 00:09:03.749 14:07:08 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.749 14:07:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:03.749 ************************************ 00:09:03.749 END TEST env_dpdk_post_init 00:09:03.749 ************************************ 00:09:03.749 14:07:08 env -- env/env.sh@26 -- # uname 00:09:03.749 14:07:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:03.749 14:07:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:03.749 14:07:08 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.749 14:07:08 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.749 14:07:08 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.749 ************************************ 00:09:03.749 START TEST env_mem_callbacks 00:09:03.749 ************************************ 00:09:03.749 14:07:08 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:09:03.749 EAL: Detected CPU lcores: 128 00:09:03.749 EAL: Detected NUMA nodes: 2 00:09:03.749 EAL: Detected shared linkage of DPDK 00:09:03.749 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:03.749 EAL: Selected IOVA mode 'VA' 00:09:03.749 EAL: VFIO support initialized 00:09:03.749 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:03.749 00:09:03.749 00:09:03.749 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.749 http://cunit.sourceforge.net/ 00:09:03.749 00:09:03.749 00:09:03.749 Suite: memory 00:09:03.749 Test: test ... 
00:09:03.749 register 0x200000200000 2097152 00:09:03.749 malloc 3145728 00:09:03.749 register 0x200000400000 4194304 00:09:03.749 buf 0x200000500000 len 3145728 PASSED 00:09:03.749 malloc 64 00:09:03.749 buf 0x2000004fff40 len 64 PASSED 00:09:03.749 malloc 4194304 00:09:03.749 register 0x200000800000 6291456 00:09:03.749 buf 0x200000a00000 len 4194304 PASSED 00:09:03.749 free 0x200000500000 3145728 00:09:03.749 free 0x2000004fff40 64 00:09:03.749 unregister 0x200000400000 4194304 PASSED 00:09:03.749 free 0x200000a00000 4194304 00:09:03.749 unregister 0x200000800000 6291456 PASSED 00:09:03.750 malloc 8388608 00:09:03.750 register 0x200000400000 10485760 00:09:03.750 buf 0x200000600000 len 8388608 PASSED 00:09:03.750 free 0x200000600000 8388608 00:09:03.750 unregister 0x200000400000 10485760 PASSED 00:09:03.750 passed 00:09:03.750 00:09:03.750 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.750 suites 1 1 n/a 0 0 00:09:03.750 tests 1 1 1 0 0 00:09:03.750 asserts 15 15 15 0 n/a 00:09:03.750 00:09:03.750 Elapsed time = 0.010 seconds 00:09:03.750 00:09:03.750 real 0m0.067s 00:09:03.750 user 0m0.024s 00:09:03.750 sys 0m0.044s 00:09:03.750 14:07:08 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.750 14:07:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 ************************************ 00:09:03.750 END TEST env_mem_callbacks 00:09:03.750 ************************************ 00:09:03.750 00:09:03.750 real 0m7.539s 00:09:03.750 user 0m1.060s 00:09:03.750 sys 0m1.039s 00:09:03.750 14:07:08 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.750 14:07:08 env -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 ************************************ 00:09:03.750 END TEST env 00:09:03.750 ************************************ 00:09:03.750 14:07:08 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:03.750 14:07:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.750 14:07:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.750 14:07:08 -- common/autotest_common.sh@10 -- # set +x 00:09:03.750 ************************************ 00:09:03.750 START TEST rpc 00:09:03.750 ************************************ 00:09:03.750 14:07:08 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:09:04.012 * Looking for test storage... 
00:09:04.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.012 14:07:08 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.012 14:07:08 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.012 14:07:08 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.012 14:07:08 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.012 14:07:08 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.012 14:07:08 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.012 14:07:08 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.012 14:07:08 rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:04.012 14:07:08 rpc -- scripts/common.sh@345 -- # : 1 00:09:04.012 14:07:08 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.012 14:07:08 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.012 14:07:08 rpc -- scripts/common.sh@365 -- # decimal 1 00:09:04.012 14:07:08 rpc -- scripts/common.sh@353 -- # local d=1 00:09:04.012 14:07:08 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.012 14:07:08 rpc -- scripts/common.sh@355 -- # echo 1 00:09:04.012 14:07:08 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.012 14:07:08 rpc -- scripts/common.sh@366 -- # decimal 2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@353 -- # local d=2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.012 14:07:08 rpc -- scripts/common.sh@355 -- # echo 2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.012 14:07:08 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.012 14:07:08 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.012 14:07:08 rpc -- scripts/common.sh@368 -- # return 0 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.012 --rc genhtml_branch_coverage=1 00:09:04.012 --rc genhtml_function_coverage=1 00:09:04.012 --rc genhtml_legend=1 00:09:04.012 --rc geninfo_all_blocks=1 00:09:04.012 --rc geninfo_unexecuted_blocks=1 00:09:04.012 00:09:04.012 ' 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.012 --rc genhtml_branch_coverage=1 00:09:04.012 --rc genhtml_function_coverage=1 00:09:04.012 --rc genhtml_legend=1 00:09:04.012 --rc geninfo_all_blocks=1 00:09:04.012 --rc geninfo_unexecuted_blocks=1 00:09:04.012 00:09:04.012 ' 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.012 --rc genhtml_branch_coverage=1 00:09:04.012 --rc genhtml_function_coverage=1 
00:09:04.012 --rc genhtml_legend=1 00:09:04.012 --rc geninfo_all_blocks=1 00:09:04.012 --rc geninfo_unexecuted_blocks=1 00:09:04.012 00:09:04.012 ' 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:04.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.012 --rc genhtml_branch_coverage=1 00:09:04.012 --rc genhtml_function_coverage=1 00:09:04.012 --rc genhtml_legend=1 00:09:04.012 --rc geninfo_all_blocks=1 00:09:04.012 --rc geninfo_unexecuted_blocks=1 00:09:04.012 00:09:04.012 ' 00:09:04.012 14:07:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3192960 00:09:04.012 14:07:08 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:09:04.012 14:07:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:04.012 14:07:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3192960 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@835 -- # '[' -z 3192960 ']' 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.012 14:07:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.012 [2024-11-25 14:07:09.031448] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:09:04.012 [2024-11-25 14:07:09.031514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3192960 ] 00:09:04.274 [2024-11-25 14:07:09.124338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.274 [2024-11-25 14:07:09.176315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:04.274 [2024-11-25 14:07:09.176373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3192960' to capture a snapshot of events at runtime. 00:09:04.274 [2024-11-25 14:07:09.176382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.274 [2024-11-25 14:07:09.176390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.274 [2024-11-25 14:07:09.176396] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3192960 for offline analysis/debug. 
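The spdk_tgt above was launched with '-e bdev' (see the rpc.sh invocation earlier in this trace), so app_setup_trace enables the bdev tracepoint group and backs the trace ring with the /dev/shm/spdk_tgt_trace.pid3192960 file named in the notice; the rpc_trace_cmd_test further down reads the same state back via trace_get_info (group "bdev": mask 0x8, tpoint_mask 0xffffffffffffffff). A minimal sketch of requesting the same setup programmatically, assuming the spdk_app_opts fields as declared in recent spdk/event.h; "sketch_tgt" and the start callback are placeholders:

#include "spdk/event.h"

static void
app_started(void *ctx)
{
	(void)ctx;
	/* Target is up; waitforlisten-style clients can now
	 * connect to the RPC socket given in opts.rpc_addr. */
}

int
main(void)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "sketch_tgt";
	opts.rpc_addr = "/var/tmp/spdk.sock";
	opts.tpoint_group_mask = "bdev";   /* equivalent of 'spdk_tgt -e bdev' */

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}
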
00:09:04.274 [2024-11-25 14:07:09.177155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.846 14:07:09 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.846 14:07:09 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:04.847 14:07:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:04.847 14:07:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:04.847 14:07:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:04.847 14:07:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:04.847 14:07:09 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.847 14:07:09 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.847 14:07:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.847 ************************************ 00:09:04.847 START TEST rpc_integrity 00:09:04.847 ************************************ 00:09:04.847 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:04.847 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:04.847 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.847 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.847 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.847 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:04.847 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:05.108 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:05.108 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:05.108 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.108 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.108 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.108 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:05.108 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:05.108 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.108 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.108 14:07:09 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.108 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:05.108 { 00:09:05.108 "name": "Malloc0", 00:09:05.108 "aliases": [ 00:09:05.108 "865b470e-f872-4453-83b8-2c0fcb1d0514" 00:09:05.108 ], 00:09:05.108 "product_name": "Malloc disk", 00:09:05.108 "block_size": 512, 00:09:05.108 "num_blocks": 16384, 00:09:05.108 "uuid": "865b470e-f872-4453-83b8-2c0fcb1d0514", 00:09:05.108 "assigned_rate_limits": { 00:09:05.108 "rw_ios_per_sec": 0, 00:09:05.108 "rw_mbytes_per_sec": 0, 00:09:05.108 "r_mbytes_per_sec": 0, 00:09:05.108 "w_mbytes_per_sec": 0 00:09:05.108 }, 
00:09:05.108 "claimed": false, 00:09:05.108 "zoned": false, 00:09:05.108 "supported_io_types": { 00:09:05.108 "read": true, 00:09:05.108 "write": true, 00:09:05.108 "unmap": true, 00:09:05.108 "flush": true, 00:09:05.108 "reset": true, 00:09:05.108 "nvme_admin": false, 00:09:05.108 "nvme_io": false, 00:09:05.108 "nvme_io_md": false, 00:09:05.108 "write_zeroes": true, 00:09:05.108 "zcopy": true, 00:09:05.108 "get_zone_info": false, 00:09:05.108 "zone_management": false, 00:09:05.108 "zone_append": false, 00:09:05.108 "compare": false, 00:09:05.108 "compare_and_write": false, 00:09:05.108 "abort": true, 00:09:05.108 "seek_hole": false, 00:09:05.108 "seek_data": false, 00:09:05.108 "copy": true, 00:09:05.108 "nvme_iov_md": false 00:09:05.108 }, 00:09:05.108 "memory_domains": [ 00:09:05.108 { 00:09:05.108 "dma_device_id": "system", 00:09:05.108 "dma_device_type": 1 00:09:05.108 }, 00:09:05.108 { 00:09:05.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.108 "dma_device_type": 2 00:09:05.108 } 00:09:05.108 ], 00:09:05.108 "driver_specific": {} 00:09:05.108 } 00:09:05.108 ]' 00:09:05.109 14:07:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.109 [2024-11-25 14:07:10.047594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:05.109 [2024-11-25 14:07:10.047646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.109 [2024-11-25 14:07:10.047665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf7ee30 00:09:05.109 [2024-11-25 14:07:10.047674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.109 [2024-11-25 14:07:10.049341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.109 [2024-11-25 14:07:10.049378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:05.109 Passthru0 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:05.109 { 00:09:05.109 "name": "Malloc0", 00:09:05.109 "aliases": [ 00:09:05.109 "865b470e-f872-4453-83b8-2c0fcb1d0514" 00:09:05.109 ], 00:09:05.109 "product_name": "Malloc disk", 00:09:05.109 "block_size": 512, 00:09:05.109 "num_blocks": 16384, 00:09:05.109 "uuid": "865b470e-f872-4453-83b8-2c0fcb1d0514", 00:09:05.109 "assigned_rate_limits": { 00:09:05.109 "rw_ios_per_sec": 0, 00:09:05.109 "rw_mbytes_per_sec": 0, 00:09:05.109 "r_mbytes_per_sec": 0, 00:09:05.109 "w_mbytes_per_sec": 0 00:09:05.109 }, 00:09:05.109 "claimed": true, 00:09:05.109 "claim_type": "exclusive_write", 00:09:05.109 "zoned": false, 00:09:05.109 "supported_io_types": { 00:09:05.109 "read": true, 00:09:05.109 "write": true, 00:09:05.109 "unmap": true, 00:09:05.109 "flush": 
true, 00:09:05.109 "reset": true, 00:09:05.109 "nvme_admin": false, 00:09:05.109 "nvme_io": false, 00:09:05.109 "nvme_io_md": false, 00:09:05.109 "write_zeroes": true, 00:09:05.109 "zcopy": true, 00:09:05.109 "get_zone_info": false, 00:09:05.109 "zone_management": false, 00:09:05.109 "zone_append": false, 00:09:05.109 "compare": false, 00:09:05.109 "compare_and_write": false, 00:09:05.109 "abort": true, 00:09:05.109 "seek_hole": false, 00:09:05.109 "seek_data": false, 00:09:05.109 "copy": true, 00:09:05.109 "nvme_iov_md": false 00:09:05.109 }, 00:09:05.109 "memory_domains": [ 00:09:05.109 { 00:09:05.109 "dma_device_id": "system", 00:09:05.109 "dma_device_type": 1 00:09:05.109 }, 00:09:05.109 { 00:09:05.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.109 "dma_device_type": 2 00:09:05.109 } 00:09:05.109 ], 00:09:05.109 "driver_specific": {} 00:09:05.109 }, 00:09:05.109 { 00:09:05.109 "name": "Passthru0", 00:09:05.109 "aliases": [ 00:09:05.109 "3e7b16ab-2e73-525e-bbe9-2101be9d5985" 00:09:05.109 ], 00:09:05.109 "product_name": "passthru", 00:09:05.109 "block_size": 512, 00:09:05.109 "num_blocks": 16384, 00:09:05.109 "uuid": "3e7b16ab-2e73-525e-bbe9-2101be9d5985", 00:09:05.109 "assigned_rate_limits": { 00:09:05.109 "rw_ios_per_sec": 0, 00:09:05.109 "rw_mbytes_per_sec": 0, 00:09:05.109 "r_mbytes_per_sec": 0, 00:09:05.109 "w_mbytes_per_sec": 0 00:09:05.109 }, 00:09:05.109 "claimed": false, 00:09:05.109 "zoned": false, 00:09:05.109 "supported_io_types": { 00:09:05.109 "read": true, 00:09:05.109 "write": true, 00:09:05.109 "unmap": true, 00:09:05.109 "flush": true, 00:09:05.109 "reset": true, 00:09:05.109 "nvme_admin": false, 00:09:05.109 "nvme_io": false, 00:09:05.109 "nvme_io_md": false, 00:09:05.109 "write_zeroes": true, 00:09:05.109 "zcopy": true, 00:09:05.109 "get_zone_info": false, 00:09:05.109 "zone_management": false, 00:09:05.109 "zone_append": false, 00:09:05.109 "compare": false, 00:09:05.109 "compare_and_write": false, 00:09:05.109 "abort": true, 00:09:05.109 "seek_hole": false, 00:09:05.109 "seek_data": false, 00:09:05.109 "copy": true, 00:09:05.109 "nvme_iov_md": false 00:09:05.109 }, 00:09:05.109 "memory_domains": [ 00:09:05.109 { 00:09:05.109 "dma_device_id": "system", 00:09:05.109 "dma_device_type": 1 00:09:05.109 }, 00:09:05.109 { 00:09:05.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.109 "dma_device_type": 2 00:09:05.109 } 00:09:05.109 ], 00:09:05.109 "driver_specific": { 00:09:05.109 "passthru": { 00:09:05.109 "name": "Passthru0", 00:09:05.109 "base_bdev_name": "Malloc0" 00:09:05.109 } 00:09:05.109 } 00:09:05.109 } 00:09:05.109 ]' 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.109 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.109 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:05.110 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.110 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.110 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.110 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:09:05.110 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.110 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.110 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.110 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:05.110 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:05.371 14:07:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:05.371 00:09:05.371 real 0m0.311s 00:09:05.371 user 0m0.195s 00:09:05.371 sys 0m0.045s 00:09:05.371 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.371 14:07:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.371 ************************************ 00:09:05.371 END TEST rpc_integrity 00:09:05.371 ************************************ 00:09:05.371 14:07:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:05.371 14:07:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.371 14:07:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.371 14:07:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.371 ************************************ 00:09:05.371 START TEST rpc_plugins 00:09:05.371 ************************************ 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:05.371 { 00:09:05.371 "name": "Malloc1", 00:09:05.371 "aliases": [ 00:09:05.371 "2707dc92-9312-41e8-ab1d-94167cdd51c5" 00:09:05.371 ], 00:09:05.371 "product_name": "Malloc disk", 00:09:05.371 "block_size": 4096, 00:09:05.371 "num_blocks": 256, 00:09:05.371 "uuid": "2707dc92-9312-41e8-ab1d-94167cdd51c5", 00:09:05.371 "assigned_rate_limits": { 00:09:05.371 "rw_ios_per_sec": 0, 00:09:05.371 "rw_mbytes_per_sec": 0, 00:09:05.371 "r_mbytes_per_sec": 0, 00:09:05.371 "w_mbytes_per_sec": 0 00:09:05.371 }, 00:09:05.371 "claimed": false, 00:09:05.371 "zoned": false, 00:09:05.371 "supported_io_types": { 00:09:05.371 "read": true, 00:09:05.371 "write": true, 00:09:05.371 "unmap": true, 00:09:05.371 "flush": true, 00:09:05.371 "reset": true, 00:09:05.371 "nvme_admin": false, 00:09:05.371 "nvme_io": false, 00:09:05.371 "nvme_io_md": false, 00:09:05.371 "write_zeroes": true, 00:09:05.371 "zcopy": true, 00:09:05.371 "get_zone_info": false, 00:09:05.371 "zone_management": false, 00:09:05.371 "zone_append": false, 00:09:05.371 "compare": false, 00:09:05.371 "compare_and_write": false, 00:09:05.371 "abort": true, 00:09:05.371 "seek_hole": false, 00:09:05.371 "seek_data": false, 00:09:05.371 "copy": true, 00:09:05.371 "nvme_iov_md": false 
00:09:05.371 }, 00:09:05.371 "memory_domains": [ 00:09:05.371 { 00:09:05.371 "dma_device_id": "system", 00:09:05.371 "dma_device_type": 1 00:09:05.371 }, 00:09:05.371 { 00:09:05.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.371 "dma_device_type": 2 00:09:05.371 } 00:09:05.371 ], 00:09:05.371 "driver_specific": {} 00:09:05.371 } 00:09:05.371 ]' 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:05.371 14:07:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:05.371 00:09:05.371 real 0m0.157s 00:09:05.371 user 0m0.097s 00:09:05.371 sys 0m0.023s 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.371 14:07:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:05.371 ************************************ 00:09:05.371 END TEST rpc_plugins 00:09:05.371 ************************************ 00:09:05.632 14:07:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:05.632 14:07:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.632 14:07:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.632 14:07:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.632 ************************************ 00:09:05.632 START TEST rpc_trace_cmd_test 00:09:05.632 ************************************ 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:05.632 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3192960", 00:09:05.632 "tpoint_group_mask": "0x8", 00:09:05.632 "iscsi_conn": { 00:09:05.632 "mask": "0x2", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "scsi": { 00:09:05.632 "mask": "0x4", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "bdev": { 00:09:05.632 "mask": "0x8", 00:09:05.632 "tpoint_mask": "0xffffffffffffffff" 00:09:05.632 }, 00:09:05.632 "nvmf_rdma": { 00:09:05.632 "mask": "0x10", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "nvmf_tcp": { 00:09:05.632 "mask": "0x20", 00:09:05.632 
"tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "ftl": { 00:09:05.632 "mask": "0x40", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "blobfs": { 00:09:05.632 "mask": "0x80", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "dsa": { 00:09:05.632 "mask": "0x200", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "thread": { 00:09:05.632 "mask": "0x400", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "nvme_pcie": { 00:09:05.632 "mask": "0x800", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "iaa": { 00:09:05.632 "mask": "0x1000", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "nvme_tcp": { 00:09:05.632 "mask": "0x2000", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "bdev_nvme": { 00:09:05.632 "mask": "0x4000", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "sock": { 00:09:05.632 "mask": "0x8000", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "blob": { 00:09:05.632 "mask": "0x10000", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "bdev_raid": { 00:09:05.632 "mask": "0x20000", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 }, 00:09:05.632 "scheduler": { 00:09:05.632 "mask": "0x40000", 00:09:05.632 "tpoint_mask": "0x0" 00:09:05.632 } 00:09:05.632 }' 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:05.632 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:05.633 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:05.633 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:05.633 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:05.893 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:05.894 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:05.894 14:07:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:05.894 00:09:05.894 real 0m0.252s 00:09:05.894 user 0m0.210s 00:09:05.894 sys 0m0.034s 00:09:05.894 14:07:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.894 14:07:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.894 ************************************ 00:09:05.894 END TEST rpc_trace_cmd_test 00:09:05.894 ************************************ 00:09:05.894 14:07:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:05.894 14:07:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:05.894 14:07:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:05.894 14:07:10 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:05.894 14:07:10 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.894 14:07:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.894 ************************************ 00:09:05.894 START TEST rpc_daemon_integrity 00:09:05.894 ************************************ 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.894 14:07:10 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:05.894 { 00:09:05.894 "name": "Malloc2", 00:09:05.894 "aliases": [ 00:09:05.894 "fa02dc76-1ec7-4284-bf76-fec98a523868" 00:09:05.894 ], 00:09:05.894 "product_name": "Malloc disk", 00:09:05.894 "block_size": 512, 00:09:05.894 "num_blocks": 16384, 00:09:05.894 "uuid": "fa02dc76-1ec7-4284-bf76-fec98a523868", 00:09:05.894 "assigned_rate_limits": { 00:09:05.894 "rw_ios_per_sec": 0, 00:09:05.894 "rw_mbytes_per_sec": 0, 00:09:05.894 "r_mbytes_per_sec": 0, 00:09:05.894 "w_mbytes_per_sec": 0 00:09:05.894 }, 00:09:05.894 "claimed": false, 00:09:05.894 "zoned": false, 00:09:05.894 "supported_io_types": { 00:09:05.894 "read": true, 00:09:05.894 "write": true, 00:09:05.894 "unmap": true, 00:09:05.894 "flush": true, 00:09:05.894 "reset": true, 00:09:05.894 "nvme_admin": false, 00:09:05.894 "nvme_io": false, 00:09:05.894 "nvme_io_md": false, 00:09:05.894 "write_zeroes": true, 00:09:05.894 "zcopy": true, 00:09:05.894 "get_zone_info": false, 00:09:05.894 "zone_management": false, 00:09:05.894 "zone_append": false, 00:09:05.894 "compare": false, 00:09:05.894 "compare_and_write": false, 00:09:05.894 "abort": true, 00:09:05.894 "seek_hole": false, 00:09:05.894 "seek_data": false, 00:09:05.894 "copy": true, 00:09:05.894 "nvme_iov_md": false 00:09:05.894 }, 00:09:05.894 "memory_domains": [ 00:09:05.894 { 00:09:05.894 "dma_device_id": "system", 00:09:05.894 "dma_device_type": 1 00:09:05.894 }, 00:09:05.894 { 00:09:05.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.894 "dma_device_type": 2 00:09:05.894 } 00:09:05.894 ], 00:09:05.894 "driver_specific": {} 00:09:05.894 } 00:09:05.894 ]' 00:09:05.894 14:07:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.154 [2024-11-25 14:07:11.010211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:06.154 
[2024-11-25 14:07:11.010259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.154 [2024-11-25 14:07:11.010279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf7fb90 00:09:06.154 [2024-11-25 14:07:11.010288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.154 [2024-11-25 14:07:11.011803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.154 [2024-11-25 14:07:11.011839] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:06.154 Passthru0 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.154 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:06.154 { 00:09:06.154 "name": "Malloc2", 00:09:06.154 "aliases": [ 00:09:06.154 "fa02dc76-1ec7-4284-bf76-fec98a523868" 00:09:06.154 ], 00:09:06.154 "product_name": "Malloc disk", 00:09:06.154 "block_size": 512, 00:09:06.154 "num_blocks": 16384, 00:09:06.154 "uuid": "fa02dc76-1ec7-4284-bf76-fec98a523868", 00:09:06.154 "assigned_rate_limits": { 00:09:06.154 "rw_ios_per_sec": 0, 00:09:06.154 "rw_mbytes_per_sec": 0, 00:09:06.154 "r_mbytes_per_sec": 0, 00:09:06.154 "w_mbytes_per_sec": 0 00:09:06.154 }, 00:09:06.154 "claimed": true, 00:09:06.154 "claim_type": "exclusive_write", 00:09:06.154 "zoned": false, 00:09:06.154 "supported_io_types": { 00:09:06.154 "read": true, 00:09:06.154 "write": true, 00:09:06.154 "unmap": true, 00:09:06.154 "flush": true, 00:09:06.154 "reset": true, 00:09:06.154 "nvme_admin": false, 00:09:06.155 "nvme_io": false, 00:09:06.155 "nvme_io_md": false, 00:09:06.155 "write_zeroes": true, 00:09:06.155 "zcopy": true, 00:09:06.155 "get_zone_info": false, 00:09:06.155 "zone_management": false, 00:09:06.155 "zone_append": false, 00:09:06.155 "compare": false, 00:09:06.155 "compare_and_write": false, 00:09:06.155 "abort": true, 00:09:06.155 "seek_hole": false, 00:09:06.155 "seek_data": false, 00:09:06.155 "copy": true, 00:09:06.155 "nvme_iov_md": false 00:09:06.155 }, 00:09:06.155 "memory_domains": [ 00:09:06.155 { 00:09:06.155 "dma_device_id": "system", 00:09:06.155 "dma_device_type": 1 00:09:06.155 }, 00:09:06.155 { 00:09:06.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.155 "dma_device_type": 2 00:09:06.155 } 00:09:06.155 ], 00:09:06.155 "driver_specific": {} 00:09:06.155 }, 00:09:06.155 { 00:09:06.155 "name": "Passthru0", 00:09:06.155 "aliases": [ 00:09:06.155 "93999df5-f72b-521b-9b54-416b7c3b25b0" 00:09:06.155 ], 00:09:06.155 "product_name": "passthru", 00:09:06.155 "block_size": 512, 00:09:06.155 "num_blocks": 16384, 00:09:06.155 "uuid": "93999df5-f72b-521b-9b54-416b7c3b25b0", 00:09:06.155 "assigned_rate_limits": { 00:09:06.155 "rw_ios_per_sec": 0, 00:09:06.155 "rw_mbytes_per_sec": 0, 00:09:06.155 "r_mbytes_per_sec": 0, 00:09:06.155 "w_mbytes_per_sec": 0 00:09:06.155 }, 00:09:06.155 "claimed": false, 00:09:06.155 "zoned": false, 00:09:06.155 "supported_io_types": { 00:09:06.155 "read": true, 00:09:06.155 "write": true, 00:09:06.155 "unmap": true, 00:09:06.155 "flush": true, 00:09:06.155 "reset": true, 
00:09:06.155 "nvme_admin": false, 00:09:06.155 "nvme_io": false, 00:09:06.155 "nvme_io_md": false, 00:09:06.155 "write_zeroes": true, 00:09:06.155 "zcopy": true, 00:09:06.155 "get_zone_info": false, 00:09:06.155 "zone_management": false, 00:09:06.155 "zone_append": false, 00:09:06.155 "compare": false, 00:09:06.155 "compare_and_write": false, 00:09:06.155 "abort": true, 00:09:06.155 "seek_hole": false, 00:09:06.155 "seek_data": false, 00:09:06.155 "copy": true, 00:09:06.155 "nvme_iov_md": false 00:09:06.155 }, 00:09:06.155 "memory_domains": [ 00:09:06.155 { 00:09:06.155 "dma_device_id": "system", 00:09:06.155 "dma_device_type": 1 00:09:06.155 }, 00:09:06.155 { 00:09:06.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.155 "dma_device_type": 2 00:09:06.155 } 00:09:06.155 ], 00:09:06.155 "driver_specific": { 00:09:06.155 "passthru": { 00:09:06.155 "name": "Passthru0", 00:09:06.155 "base_bdev_name": "Malloc2" 00:09:06.155 } 00:09:06.155 } 00:09:06.155 } 00:09:06.155 ]' 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:06.155 00:09:06.155 real 0m0.306s 00:09:06.155 user 0m0.198s 00:09:06.155 sys 0m0.041s 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.155 14:07:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:06.155 ************************************ 00:09:06.155 END TEST rpc_daemon_integrity 00:09:06.155 ************************************ 00:09:06.155 14:07:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:06.155 14:07:11 rpc -- rpc/rpc.sh@84 -- # killprocess 3192960 00:09:06.155 14:07:11 rpc -- common/autotest_common.sh@954 -- # '[' -z 3192960 ']' 00:09:06.155 14:07:11 rpc -- common/autotest_common.sh@958 -- # kill -0 3192960 00:09:06.155 14:07:11 rpc -- common/autotest_common.sh@959 -- # uname 00:09:06.155 14:07:11 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:06.155 14:07:11 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3192960 
00:09:06.415 14:07:11 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.415 14:07:11 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.415 14:07:11 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3192960' 00:09:06.415 killing process with pid 3192960 00:09:06.415 14:07:11 rpc -- common/autotest_common.sh@973 -- # kill 3192960 00:09:06.415 14:07:11 rpc -- common/autotest_common.sh@978 -- # wait 3192960 00:09:06.675 00:09:06.675 real 0m2.744s 00:09:06.675 user 0m3.500s 00:09:06.675 sys 0m0.857s 00:09:06.675 14:07:11 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.675 14:07:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.675 ************************************ 00:09:06.675 END TEST rpc 00:09:06.675 ************************************ 00:09:06.675 14:07:11 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:06.675 14:07:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.675 14:07:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.675 14:07:11 -- common/autotest_common.sh@10 -- # set +x 00:09:06.675 ************************************ 00:09:06.675 START TEST skip_rpc 00:09:06.675 ************************************ 00:09:06.675 14:07:11 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:06.675 * Looking for test storage... 00:09:06.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:06.675 14:07:11 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.675 14:07:11 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.675 14:07:11 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.937 14:07:11 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.937 --rc genhtml_branch_coverage=1 00:09:06.937 --rc genhtml_function_coverage=1 00:09:06.937 --rc genhtml_legend=1 00:09:06.937 --rc geninfo_all_blocks=1 00:09:06.937 --rc geninfo_unexecuted_blocks=1 00:09:06.937 00:09:06.937 ' 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.937 --rc genhtml_branch_coverage=1 00:09:06.937 --rc genhtml_function_coverage=1 00:09:06.937 --rc genhtml_legend=1 00:09:06.937 --rc geninfo_all_blocks=1 00:09:06.937 --rc geninfo_unexecuted_blocks=1 00:09:06.937 00:09:06.937 ' 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.937 --rc genhtml_branch_coverage=1 00:09:06.937 --rc genhtml_function_coverage=1 00:09:06.937 --rc genhtml_legend=1 00:09:06.937 --rc geninfo_all_blocks=1 00:09:06.937 --rc geninfo_unexecuted_blocks=1 00:09:06.937 00:09:06.937 ' 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.937 --rc genhtml_branch_coverage=1 00:09:06.937 --rc genhtml_function_coverage=1 00:09:06.937 --rc genhtml_legend=1 00:09:06.937 --rc geninfo_all_blocks=1 00:09:06.937 --rc geninfo_unexecuted_blocks=1 00:09:06.937 00:09:06.937 ' 00:09:06.937 14:07:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:06.937 14:07:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:06.937 14:07:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.937 14:07:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.937 ************************************ 00:09:06.937 START TEST skip_rpc 00:09:06.937 ************************************ 00:09:06.937 14:07:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:06.937 
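The skip_rpc case starting here launches spdk_tgt with --no-rpc-server and then asserts, via the NOT wrapper seen below, that 'rpc_cmd spdk_get_version' fails (es=1 confirms no RPC socket was ever created). The version string itself is a compile-time constant, so it remains available in-process even with the RPC server disabled -- a trivial sketch, assuming only the SPDK_VERSION_STRING macro from spdk/version.h:

#include <stdio.h>
#include "spdk/version.h"

int
main(void)
{
	/* What 'rpc_cmd spdk_get_version' fetches over the socket is,
	 * in-process, just this compile-time string -- available even
	 * when the target runs with --no-rpc-server. */
	printf("SPDK version: %s\n", SPDK_VERSION_STRING);
	return 0;
}
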
14:07:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3193814 00:09:06.937 14:07:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.938 14:07:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:06.938 14:07:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:06.938 [2024-11-25 14:07:11.919003] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:09:06.938 [2024-11-25 14:07:11.919063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193814 ] 00:09:06.938 [2024-11-25 14:07:12.009988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.200 [2024-11-25 14:07:12.062498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3193814 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3193814 ']' 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3193814 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193814 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193814' 00:09:12.489 killing process with pid 3193814 00:09:12.489 14:07:16 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3193814 00:09:12.489 14:07:16 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3193814 00:09:12.489 00:09:12.489 real 0m5.264s 00:09:12.489 user 0m5.011s 00:09:12.489 sys 0m0.303s 00:09:12.489 14:07:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:12.489 14:07:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.489 ************************************ 00:09:12.489 END TEST skip_rpc 00:09:12.489 ************************************ 00:09:12.489 14:07:17 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:12.489 14:07:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:12.489 14:07:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:12.489 14:07:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.489 ************************************ 00:09:12.489 START TEST skip_rpc_with_json 00:09:12.489 ************************************ 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3194850 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3194850 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3194850 ']' 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:12.489 14:07:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:12.489 [2024-11-25 14:07:17.246747] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
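The skip_rpc block that just finished (END TEST skip_rpc) exercises the simplest case: spdk_tgt runs with --no-rpc-server, so the rpc_cmd wrapper has to fail (the NOT helper asserts a non-zero exit), and killprocess then reaps the target. A minimal sketch of that shape, assuming an SPDK build's spdk_tgt and scripts/rpc.py on the default /var/tmp/spdk.sock socket; this is an illustrative simplification, not the verbatim test/rpc/skip_rpc.sh:

    # Illustrative sketch only: start the target without an RPC server and
    # require that any RPC attempt fails before tearing the target down.
    spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                              # the test sleeps rather than waiting on a socket
    if rpc.py spdk_get_version; then     # must fail: nothing listens on /var/tmp/spdk.sock
        echo "RPC unexpectedly succeeded"
        kill "$spdk_pid"
        exit 1
    fi
    kill "$spdk_pid"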
00:09:12.489 [2024-11-25 14:07:17.246797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3194850 ] 00:09:12.489 [2024-11-25 14:07:17.331314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.489 [2024-11-25 14:07:17.361795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:13.061 [2024-11-25 14:07:18.047493] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:13.061 request: 00:09:13.061 { 00:09:13.061 "trtype": "tcp", 00:09:13.061 "method": "nvmf_get_transports", 00:09:13.061 "req_id": 1 00:09:13.061 } 00:09:13.061 Got JSON-RPC error response 00:09:13.061 response: 00:09:13.061 { 00:09:13.061 "code": -19, 00:09:13.061 "message": "No such device" 00:09:13.061 } 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:13.061 [2024-11-25 14:07:18.059587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.061 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:13.322 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.322 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:13.322 { 00:09:13.322 "subsystems": [ 00:09:13.322 { 00:09:13.322 "subsystem": "fsdev", 00:09:13.322 "config": [ 00:09:13.322 { 00:09:13.322 "method": "fsdev_set_opts", 00:09:13.322 "params": { 00:09:13.322 "fsdev_io_pool_size": 65535, 00:09:13.322 "fsdev_io_cache_size": 256 00:09:13.322 } 00:09:13.322 } 00:09:13.322 ] 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "subsystem": "vfio_user_target", 00:09:13.322 "config": null 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "subsystem": "keyring", 00:09:13.322 "config": [] 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "subsystem": "iobuf", 00:09:13.322 "config": [ 00:09:13.322 { 00:09:13.322 "method": "iobuf_set_options", 00:09:13.322 "params": { 00:09:13.322 "small_pool_count": 8192, 00:09:13.322 "large_pool_count": 1024, 00:09:13.322 "small_bufsize": 8192, 00:09:13.322 "large_bufsize": 135168, 00:09:13.322 "enable_numa": false 00:09:13.322 } 00:09:13.322 } 
00:09:13.322 ] 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "subsystem": "sock", 00:09:13.322 "config": [ 00:09:13.322 { 00:09:13.322 "method": "sock_set_default_impl", 00:09:13.322 "params": { 00:09:13.322 "impl_name": "posix" 00:09:13.322 } 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "method": "sock_impl_set_options", 00:09:13.322 "params": { 00:09:13.322 "impl_name": "ssl", 00:09:13.322 "recv_buf_size": 4096, 00:09:13.322 "send_buf_size": 4096, 00:09:13.322 "enable_recv_pipe": true, 00:09:13.322 "enable_quickack": false, 00:09:13.322 "enable_placement_id": 0, 00:09:13.322 "enable_zerocopy_send_server": true, 00:09:13.322 "enable_zerocopy_send_client": false, 00:09:13.322 "zerocopy_threshold": 0, 00:09:13.322 "tls_version": 0, 00:09:13.322 "enable_ktls": false 00:09:13.322 } 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "method": "sock_impl_set_options", 00:09:13.322 "params": { 00:09:13.322 "impl_name": "posix", 00:09:13.322 "recv_buf_size": 2097152, 00:09:13.322 "send_buf_size": 2097152, 00:09:13.322 "enable_recv_pipe": true, 00:09:13.322 "enable_quickack": false, 00:09:13.322 "enable_placement_id": 0, 00:09:13.322 "enable_zerocopy_send_server": true, 00:09:13.322 "enable_zerocopy_send_client": false, 00:09:13.322 "zerocopy_threshold": 0, 00:09:13.322 "tls_version": 0, 00:09:13.322 "enable_ktls": false 00:09:13.322 } 00:09:13.322 } 00:09:13.322 ] 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "subsystem": "vmd", 00:09:13.322 "config": [] 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "subsystem": "accel", 00:09:13.322 "config": [ 00:09:13.322 { 00:09:13.322 "method": "accel_set_options", 00:09:13.322 "params": { 00:09:13.322 "small_cache_size": 128, 00:09:13.322 "large_cache_size": 16, 00:09:13.322 "task_count": 2048, 00:09:13.322 "sequence_count": 2048, 00:09:13.322 "buf_count": 2048 00:09:13.322 } 00:09:13.322 } 00:09:13.322 ] 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "subsystem": "bdev", 00:09:13.322 "config": [ 00:09:13.322 { 00:09:13.322 "method": "bdev_set_options", 00:09:13.322 "params": { 00:09:13.322 "bdev_io_pool_size": 65535, 00:09:13.322 "bdev_io_cache_size": 256, 00:09:13.322 "bdev_auto_examine": true, 00:09:13.322 "iobuf_small_cache_size": 128, 00:09:13.322 "iobuf_large_cache_size": 16 00:09:13.322 } 00:09:13.322 }, 00:09:13.322 { 00:09:13.322 "method": "bdev_raid_set_options", 00:09:13.322 "params": { 00:09:13.322 "process_window_size_kb": 1024, 00:09:13.322 "process_max_bandwidth_mb_sec": 0 00:09:13.322 } 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "bdev_iscsi_set_options", 00:09:13.323 "params": { 00:09:13.323 "timeout_sec": 30 00:09:13.323 } 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "bdev_nvme_set_options", 00:09:13.323 "params": { 00:09:13.323 "action_on_timeout": "none", 00:09:13.323 "timeout_us": 0, 00:09:13.323 "timeout_admin_us": 0, 00:09:13.323 "keep_alive_timeout_ms": 10000, 00:09:13.323 "arbitration_burst": 0, 00:09:13.323 "low_priority_weight": 0, 00:09:13.323 "medium_priority_weight": 0, 00:09:13.323 "high_priority_weight": 0, 00:09:13.323 "nvme_adminq_poll_period_us": 10000, 00:09:13.323 "nvme_ioq_poll_period_us": 0, 00:09:13.323 "io_queue_requests": 0, 00:09:13.323 "delay_cmd_submit": true, 00:09:13.323 "transport_retry_count": 4, 00:09:13.323 "bdev_retry_count": 3, 00:09:13.323 "transport_ack_timeout": 0, 00:09:13.323 "ctrlr_loss_timeout_sec": 0, 00:09:13.323 "reconnect_delay_sec": 0, 00:09:13.323 "fast_io_fail_timeout_sec": 0, 00:09:13.323 "disable_auto_failback": false, 00:09:13.323 "generate_uuids": false, 00:09:13.323 "transport_tos": 
0, 00:09:13.323 "nvme_error_stat": false, 00:09:13.323 "rdma_srq_size": 0, 00:09:13.323 "io_path_stat": false, 00:09:13.323 "allow_accel_sequence": false, 00:09:13.323 "rdma_max_cq_size": 0, 00:09:13.323 "rdma_cm_event_timeout_ms": 0, 00:09:13.323 "dhchap_digests": [ 00:09:13.323 "sha256", 00:09:13.323 "sha384", 00:09:13.323 "sha512" 00:09:13.323 ], 00:09:13.323 "dhchap_dhgroups": [ 00:09:13.323 "null", 00:09:13.323 "ffdhe2048", 00:09:13.323 "ffdhe3072", 00:09:13.323 "ffdhe4096", 00:09:13.323 "ffdhe6144", 00:09:13.323 "ffdhe8192" 00:09:13.323 ] 00:09:13.323 } 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "bdev_nvme_set_hotplug", 00:09:13.323 "params": { 00:09:13.323 "period_us": 100000, 00:09:13.323 "enable": false 00:09:13.323 } 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "bdev_wait_for_examine" 00:09:13.323 } 00:09:13.323 ] 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "scsi", 00:09:13.323 "config": null 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "scheduler", 00:09:13.323 "config": [ 00:09:13.323 { 00:09:13.323 "method": "framework_set_scheduler", 00:09:13.323 "params": { 00:09:13.323 "name": "static" 00:09:13.323 } 00:09:13.323 } 00:09:13.323 ] 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "vhost_scsi", 00:09:13.323 "config": [] 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "vhost_blk", 00:09:13.323 "config": [] 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "ublk", 00:09:13.323 "config": [] 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "nbd", 00:09:13.323 "config": [] 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "nvmf", 00:09:13.323 "config": [ 00:09:13.323 { 00:09:13.323 "method": "nvmf_set_config", 00:09:13.323 "params": { 00:09:13.323 "discovery_filter": "match_any", 00:09:13.323 "admin_cmd_passthru": { 00:09:13.323 "identify_ctrlr": false 00:09:13.323 }, 00:09:13.323 "dhchap_digests": [ 00:09:13.323 "sha256", 00:09:13.323 "sha384", 00:09:13.323 "sha512" 00:09:13.323 ], 00:09:13.323 "dhchap_dhgroups": [ 00:09:13.323 "null", 00:09:13.323 "ffdhe2048", 00:09:13.323 "ffdhe3072", 00:09:13.323 "ffdhe4096", 00:09:13.323 "ffdhe6144", 00:09:13.323 "ffdhe8192" 00:09:13.323 ] 00:09:13.323 } 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "nvmf_set_max_subsystems", 00:09:13.323 "params": { 00:09:13.323 "max_subsystems": 1024 00:09:13.323 } 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "nvmf_set_crdt", 00:09:13.323 "params": { 00:09:13.323 "crdt1": 0, 00:09:13.323 "crdt2": 0, 00:09:13.323 "crdt3": 0 00:09:13.323 } 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "nvmf_create_transport", 00:09:13.323 "params": { 00:09:13.323 "trtype": "TCP", 00:09:13.323 "max_queue_depth": 128, 00:09:13.323 "max_io_qpairs_per_ctrlr": 127, 00:09:13.323 "in_capsule_data_size": 4096, 00:09:13.323 "max_io_size": 131072, 00:09:13.323 "io_unit_size": 131072, 00:09:13.323 "max_aq_depth": 128, 00:09:13.323 "num_shared_buffers": 511, 00:09:13.323 "buf_cache_size": 4294967295, 00:09:13.323 "dif_insert_or_strip": false, 00:09:13.323 "zcopy": false, 00:09:13.323 "c2h_success": true, 00:09:13.323 "sock_priority": 0, 00:09:13.323 "abort_timeout_sec": 1, 00:09:13.323 "ack_timeout": 0, 00:09:13.323 "data_wr_pool_size": 0 00:09:13.323 } 00:09:13.323 } 00:09:13.323 ] 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "subsystem": "iscsi", 00:09:13.323 "config": [ 00:09:13.323 { 00:09:13.323 "method": "iscsi_set_options", 00:09:13.323 "params": { 00:09:13.323 "node_base": "iqn.2016-06.io.spdk", 00:09:13.323 "max_sessions": 
128, 00:09:13.323 "max_connections_per_session": 2, 00:09:13.323 "max_queue_depth": 64, 00:09:13.323 "default_time2wait": 2, 00:09:13.323 "default_time2retain": 20, 00:09:13.323 "first_burst_length": 8192, 00:09:13.323 "immediate_data": true, 00:09:13.323 "allow_duplicated_isid": false, 00:09:13.323 "error_recovery_level": 0, 00:09:13.323 "nop_timeout": 60, 00:09:13.323 "nop_in_interval": 30, 00:09:13.323 "disable_chap": false, 00:09:13.323 "require_chap": false, 00:09:13.323 "mutual_chap": false, 00:09:13.323 "chap_group": 0, 00:09:13.323 "max_large_datain_per_connection": 64, 00:09:13.323 "max_r2t_per_connection": 4, 00:09:13.323 "pdu_pool_size": 36864, 00:09:13.323 "immediate_data_pool_size": 16384, 00:09:13.323 "data_out_pool_size": 2048 00:09:13.323 } 00:09:13.323 } 00:09:13.323 ] 00:09:13.323 } 00:09:13.323 ] 00:09:13.323 } 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3194850 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3194850 ']' 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3194850 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194850 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194850' 00:09:13.323 killing process with pid 3194850 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3194850 00:09:13.323 14:07:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3194850 00:09:13.584 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3195190 00:09:13.584 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:13.584 14:07:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3195190 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3195190 ']' 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3195190 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195190 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3195190' 00:09:18.990 killing process with pid 3195190 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3195190 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3195190 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:18.990 00:09:18.990 real 0m6.565s 00:09:18.990 user 0m6.490s 00:09:18.990 sys 0m0.556s 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:18.990 ************************************ 00:09:18.990 END TEST skip_rpc_with_json 00:09:18.990 ************************************ 00:09:18.990 14:07:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:18.990 14:07:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.990 14:07:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.990 14:07:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.990 ************************************ 00:09:18.990 START TEST skip_rpc_with_delay 00:09:18.990 ************************************ 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:18.990 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:18.991 
[2024-11-25 14:07:23.897261] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:18.991 00:09:18.991 real 0m0.077s 00:09:18.991 user 0m0.049s 00:09:18.991 sys 0m0.028s 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.991 14:07:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:18.991 ************************************ 00:09:18.991 END TEST skip_rpc_with_delay 00:09:18.991 ************************************ 00:09:18.991 14:07:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:18.991 14:07:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:18.991 14:07:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:18.991 14:07:23 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:18.991 14:07:23 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.991 14:07:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.991 ************************************ 00:09:18.991 START TEST exit_on_failed_rpc_init 00:09:18.991 ************************************ 00:09:18.991 14:07:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:18.991 14:07:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3196269 00:09:18.991 14:07:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3196269 00:09:18.991 14:07:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.991 14:07:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3196269 ']' 00:09:18.991 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.991 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.991 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.991 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.991 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:18.991 [2024-11-25 14:07:24.057191] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
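skip_rpc_with_delay, which ended just above, never gets a running target at all: --wait-for-rpc only makes sense when an RPC server will eventually resume the app, so combining it with --no-rpc-server is rejected at startup (the app.c *ERROR* in the trace), and the NOT wrapper turns that rejection into a pass. A hedged restatement of the same assertion, with illustrative names rather than the test's own helpers:

    # Illustrative sketch only: the flag combination must fail fast, so a
    # successful launch here is the test failure, not the other way around.
    if spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt accepted --wait-for-rpc without an RPC server"
        exit 1
    fi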
00:09:18.991 [2024-11-25 14:07:24.057238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196269 ] 00:09:19.251 [2024-11-25 14:07:24.142353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.251 [2024-11-25 14:07:24.173354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:19.822 14:07:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:19.822 [2024-11-25 14:07:24.896540] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:09:19.822 [2024-11-25 14:07:24.896595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196424 ] 00:09:20.082 [2024-11-25 14:07:24.982186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.082 [2024-11-25 14:07:25.018311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.082 [2024-11-25 14:07:25.018361] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:09:20.082 [2024-11-25 14:07:25.018370] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:20.082 [2024-11-25 14:07:25.018377] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3196269 00:09:20.082 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3196269 ']' 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3196269 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196269 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196269' 00:09:20.083 killing process with pid 3196269 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3196269 00:09:20.083 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3196269 00:09:20.343 00:09:20.343 real 0m1.306s 00:09:20.343 user 0m1.512s 00:09:20.343 sys 0m0.385s 00:09:20.343 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.343 14:07:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:20.343 ************************************ 00:09:20.343 END TEST exit_on_failed_rpc_init 00:09:20.343 ************************************ 00:09:20.343 14:07:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:20.343 00:09:20.343 real 0m13.741s 00:09:20.343 user 0m13.286s 00:09:20.343 sys 0m1.607s 00:09:20.343 14:07:25 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.343 14:07:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.343 ************************************ 00:09:20.343 END TEST skip_rpc 00:09:20.343 ************************************ 00:09:20.343 14:07:25 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:20.343 14:07:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.343 14:07:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.343 14:07:25 -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.343 ************************************ 00:09:20.343 START TEST rpc_client 00:09:20.343 ************************************ 00:09:20.343 14:07:25 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:20.604 * Looking for test storage... 00:09:20.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.604 14:07:25 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.604 --rc genhtml_branch_coverage=1 00:09:20.604 --rc genhtml_function_coverage=1 00:09:20.604 --rc genhtml_legend=1 00:09:20.604 --rc geninfo_all_blocks=1 00:09:20.604 --rc geninfo_unexecuted_blocks=1 00:09:20.604 00:09:20.604 ' 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.604 --rc genhtml_branch_coverage=1 00:09:20.604 --rc genhtml_function_coverage=1 00:09:20.604 --rc genhtml_legend=1 00:09:20.604 --rc geninfo_all_blocks=1 00:09:20.604 --rc geninfo_unexecuted_blocks=1 00:09:20.604 00:09:20.604 ' 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.604 --rc genhtml_branch_coverage=1 00:09:20.604 --rc genhtml_function_coverage=1 00:09:20.604 --rc genhtml_legend=1 00:09:20.604 --rc geninfo_all_blocks=1 00:09:20.604 --rc geninfo_unexecuted_blocks=1 00:09:20.604 00:09:20.604 ' 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.604 --rc genhtml_branch_coverage=1 00:09:20.604 --rc genhtml_function_coverage=1 00:09:20.604 --rc genhtml_legend=1 00:09:20.604 --rc geninfo_all_blocks=1 00:09:20.604 --rc geninfo_unexecuted_blocks=1 00:09:20.604 00:09:20.604 ' 00:09:20.604 14:07:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:20.604 OK 00:09:20.604 14:07:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:20.604 00:09:20.604 real 0m0.222s 00:09:20.604 user 0m0.143s 00:09:20.604 sys 0m0.093s 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.604 14:07:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:20.604 ************************************ 00:09:20.604 END TEST rpc_client 00:09:20.605 ************************************ 00:09:20.605 14:07:25 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:09:20.605 14:07:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.605 14:07:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.605 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:09:20.866 ************************************ 00:09:20.866 START TEST json_config 00:09:20.866 ************************************ 00:09:20.866 14:07:25 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:20.866 14:07:25 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:20.866 14:07:25 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:09:20.866 14:07:25 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:20.866 14:07:25 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:20.866 14:07:25 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:20.866 14:07:25 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:20.866 14:07:25 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:20.866 14:07:25 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.866 14:07:25 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:20.866 14:07:25 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:20.866 14:07:25 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:20.866 14:07:25 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:20.866 14:07:25 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:20.866 14:07:25 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:20.866 14:07:25 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:20.866 14:07:25 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:20.866 14:07:25 json_config -- scripts/common.sh@345 -- # : 1 00:09:20.866 14:07:25 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:20.866 14:07:25 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:20.867 14:07:25 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:20.867 14:07:25 json_config -- scripts/common.sh@353 -- # local d=1 00:09:20.867 14:07:25 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.867 14:07:25 json_config -- scripts/common.sh@355 -- # echo 1 00:09:20.867 14:07:25 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:20.867 14:07:25 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:20.867 14:07:25 json_config -- scripts/common.sh@353 -- # local d=2 00:09:20.867 14:07:25 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.867 14:07:25 json_config -- scripts/common.sh@355 -- # echo 2 00:09:20.867 14:07:25 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:20.867 14:07:25 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:20.867 14:07:25 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:20.867 14:07:25 json_config -- scripts/common.sh@368 -- # return 0 00:09:20.867 14:07:25 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.867 14:07:25 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:20.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.867 --rc genhtml_branch_coverage=1 00:09:20.867 --rc genhtml_function_coverage=1 00:09:20.867 --rc genhtml_legend=1 00:09:20.867 --rc geninfo_all_blocks=1 00:09:20.867 --rc geninfo_unexecuted_blocks=1 00:09:20.867 00:09:20.867 ' 00:09:20.867 14:07:25 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:20.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.867 --rc genhtml_branch_coverage=1 00:09:20.867 --rc genhtml_function_coverage=1 00:09:20.867 --rc genhtml_legend=1 00:09:20.867 --rc geninfo_all_blocks=1 00:09:20.867 --rc geninfo_unexecuted_blocks=1 00:09:20.867 00:09:20.867 ' 00:09:20.867 14:07:25 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:20.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.867 --rc genhtml_branch_coverage=1 00:09:20.867 --rc genhtml_function_coverage=1 00:09:20.867 --rc genhtml_legend=1 00:09:20.867 --rc geninfo_all_blocks=1 00:09:20.867 --rc geninfo_unexecuted_blocks=1 00:09:20.867 00:09:20.867 ' 00:09:20.867 14:07:25 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:20.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.867 --rc genhtml_branch_coverage=1 00:09:20.867 --rc genhtml_function_coverage=1 00:09:20.867 --rc genhtml_legend=1 00:09:20.867 --rc geninfo_all_blocks=1 00:09:20.867 --rc geninfo_unexecuted_blocks=1 00:09:20.867 00:09:20.867 ' 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:09:20.867 14:07:25 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.867 14:07:25 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:20.867 14:07:25 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.867 14:07:25 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.867 14:07:25 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.867 14:07:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.867 14:07:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.867 14:07:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.867 14:07:25 json_config -- paths/export.sh@5 -- # export PATH 00:09:20.867 14:07:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@51 -- # : 0 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:09:20.867 14:07:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:20.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:20.867 14:07:25 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:20.867 14:07:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:20.868 INFO: JSON configuration test init 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:20.868 14:07:25 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:20.868 14:07:25 json_config -- 
json_config/common.sh@9 -- # local app=target 00:09:20.868 14:07:25 json_config -- json_config/common.sh@10 -- # shift 00:09:20.868 14:07:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:20.868 14:07:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:20.868 14:07:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:20.868 14:07:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.868 14:07:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:20.868 14:07:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3196740 00:09:20.868 14:07:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:20.868 Waiting for target to run... 00:09:20.868 14:07:25 json_config -- json_config/common.sh@25 -- # waitforlisten 3196740 /var/tmp/spdk_tgt.sock 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@835 -- # '[' -z 3196740 ']' 00:09:20.868 14:07:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:20.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.868 14:07:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.129 [2024-11-25 14:07:25.997779] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
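The json_config target starting here is brought up paused: spdk_tgt launches with -s 1024 (capping DPDK memory at 1024 MB) and --wait-for-rpc on the private /var/tmp/spdk_tgt.sock socket, waitforlisten blocks until that socket answers, and a generated NVMe config is fed into load_config. A sketch of that bootstrap, assuming the gen_nvme.sh and rpc.py helpers named in the trace; the pipe between them is inferred from the helper order, not quoted from the test:

    # Illustrative sketch only; socket and flags mirror the trace above.
    spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # ... block until /var/tmp/spdk_tgt.sock accepts connections ...
    scripts/gen_nvme.sh --json-with-subsystems \
        | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config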
00:09:21.129 [2024-11-25 14:07:25.997836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3196740 ] 00:09:21.390 [2024-11-25 14:07:26.401361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.390 [2024-11-25 14:07:26.434396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.962 14:07:26 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.962 14:07:26 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:21.962 14:07:26 json_config -- json_config/common.sh@26 -- # echo '' 00:09:21.962 00:09:21.962 14:07:26 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:21.962 14:07:26 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:21.962 14:07:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.962 14:07:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.962 14:07:26 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:21.962 14:07:26 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:21.962 14:07:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.962 14:07:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.962 14:07:26 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:21.962 14:07:26 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:21.962 14:07:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:22.534 14:07:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.534 14:07:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:22.534 14:07:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:22.534 14:07:27 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@54 -- # sort 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:22.534 14:07:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.534 14:07:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:22.534 14:07:27 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:22.534 14:07:27 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.534 14:07:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.795 14:07:27 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:22.795 14:07:27 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:22.795 14:07:27 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:22.795 14:07:27 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:22.795 14:07:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:22.795 MallocForNvmf0 00:09:22.795 14:07:27 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:22.795 14:07:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:23.054 MallocForNvmf1 00:09:23.054 14:07:27 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:23.054 14:07:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:23.054 [2024-11-25 14:07:28.125930] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.315 14:07:28 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.315 14:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.315 14:07:28 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:23.315 14:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:23.576 14:07:28 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:23.576 14:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:23.836 14:07:28 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:23.836 14:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:23.836 [2024-11-25 14:07:28.848119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:23.836 14:07:28 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:23.836 14:07:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.836 14:07:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.836 14:07:28 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:23.836 14:07:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.836 14:07:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:24.096 14:07:28 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:24.096 14:07:28 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:24.096 14:07:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:24.096 MallocBdevForConfigChangeCheck 00:09:24.096 14:07:29 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:24.096 14:07:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.096 14:07:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:24.096 14:07:29 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:24.096 14:07:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:24.665 14:07:29 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:09:24.665 INFO: shutting down applications... 
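[Editor's note] For readers following the trace: the create_nvmf_subsystem_config step above reduces to a short RPC sequence. A minimal sketch in bash, assuming a spdk_tgt already serving RPCs on /var/tmp/spdk_tgt.sock (paths as used in this workspace); the commands and arguments are the ones visible in the trace:

```bash
#!/usr/bin/env bash
# Sketch of the create_nvmf_subsystem_config steps traced above; assumes a
# running spdk_tgt listening on /var/tmp/spdk_tgt.sock.
rpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk_tgt.sock "$@"
}

rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
rpc nvmf_create_transport -t tcp -u 8192 -c 0          # "TCP Transport Init" in the log
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
```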
00:09:24.665 14:07:29 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:24.665 14:07:29 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:24.665 14:07:29 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:24.665 14:07:29 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:24.926 Calling clear_iscsi_subsystem 00:09:24.926 Calling clear_nvmf_subsystem 00:09:24.926 Calling clear_nbd_subsystem 00:09:24.926 Calling clear_ublk_subsystem 00:09:24.926 Calling clear_vhost_blk_subsystem 00:09:24.926 Calling clear_vhost_scsi_subsystem 00:09:24.926 Calling clear_bdev_subsystem 00:09:24.926 14:07:29 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:09:24.926 14:07:29 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:24.926 14:07:29 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:24.926 14:07:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:24.926 14:07:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:24.927 14:07:29 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:09:25.187 14:07:30 json_config -- json_config/json_config.sh@352 -- # break 00:09:25.187 14:07:30 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:25.187 14:07:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:09:25.187 14:07:30 json_config -- json_config/common.sh@31 -- # local app=target 00:09:25.187 14:07:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:25.187 14:07:30 json_config -- json_config/common.sh@35 -- # [[ -n 3196740 ]] 00:09:25.187 14:07:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3196740 00:09:25.187 14:07:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:25.187 14:07:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:25.187 14:07:30 json_config -- json_config/common.sh@41 -- # kill -0 3196740 00:09:25.187 14:07:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:25.757 14:07:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:25.757 14:07:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:25.757 14:07:30 json_config -- json_config/common.sh@41 -- # kill -0 3196740 00:09:25.757 14:07:30 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:25.757 14:07:30 json_config -- json_config/common.sh@43 -- # break 00:09:25.757 14:07:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:25.757 14:07:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:25.757 SPDK target shutdown done 00:09:25.757 14:07:30 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:25.757 INFO: relaunching applications... 
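[Editor's note] The shutdown just traced is json_config_test_shutdown_app: SIGINT the target, then poll `kill -0` up to 30 times at 0.5 s intervals until the PID is gone. A condensed sketch of that loop (PID 3196740 in this run):

```bash
# Sketch of the shutdown wait seen in json_config/common.sh above.
pid=3196740
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'   # matches the message in the log
        break
    fi
    sleep 0.5
done
```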
00:09:25.757 14:07:30 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:25.757 14:07:30 json_config -- json_config/common.sh@9 -- # local app=target 00:09:25.757 14:07:30 json_config -- json_config/common.sh@10 -- # shift 00:09:25.757 14:07:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:25.757 14:07:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:25.757 14:07:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:25.757 14:07:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.757 14:07:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:25.757 14:07:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3197873 00:09:25.757 14:07:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:25.757 Waiting for target to run... 00:09:25.757 14:07:30 json_config -- json_config/common.sh@25 -- # waitforlisten 3197873 /var/tmp/spdk_tgt.sock 00:09:25.757 14:07:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:25.757 14:07:30 json_config -- common/autotest_common.sh@835 -- # '[' -z 3197873 ']' 00:09:25.757 14:07:30 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:25.757 14:07:30 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.757 14:07:30 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:25.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:25.757 14:07:30 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.757 14:07:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:26.018 [2024-11-25 14:07:30.852008] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:09:26.018 [2024-11-25 14:07:30.852070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3197873 ] 00:09:26.280 [2024-11-25 14:07:31.209856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.280 [2024-11-25 14:07:31.237048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.852 [2024-11-25 14:07:31.739586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.852 [2024-11-25 14:07:31.771935] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:26.852 14:07:31 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.852 14:07:31 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:26.852 14:07:31 json_config -- json_config/common.sh@26 -- # echo '' 00:09:26.852 00:09:26.852 14:07:31 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:09:26.852 14:07:31 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:09:26.852 INFO: Checking if target configuration is the same... 
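[Editor's note] The json_diff.sh run traced next normalizes both JSON documents before diffing, so only real configuration differences count. A condensed sketch; the real script mktemp's its files (/tmp/62.XXX), so the file names here are illustrative only:

```bash
# Sketch of the json_diff.sh comparison traced below; temp-file names are
# illustrative (the actual script uses mktemp).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
filter="$rootdir/test/json_config/config_filter.py"

"$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
    | "$filter" -method sort > /tmp/live.sorted                  # live target config
"$filter" -method sort < "$rootdir/spdk_tgt_config.json" > /tmp/saved.sorted
if diff -u /tmp/live.sorted /tmp/saved.sorted; then
    echo 'INFO: JSON config files are the same'
fi
```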
00:09:26.852 14:07:31 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:26.852 14:07:31 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:09:26.852 14:07:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:26.852 + '[' 2 -ne 2 ']' 00:09:26.852 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:26.852 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:09:26.852 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:26.852 +++ basename /dev/fd/62 00:09:26.852 ++ mktemp /tmp/62.XXX 00:09:26.852 + tmp_file_1=/tmp/62.glD 00:09:26.852 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:26.852 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:26.852 + tmp_file_2=/tmp/spdk_tgt_config.json.Hu8 00:09:26.852 + ret=0 00:09:26.852 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:27.113 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:27.113 + diff -u /tmp/62.glD /tmp/spdk_tgt_config.json.Hu8 00:09:27.113 + echo 'INFO: JSON config files are the same' 00:09:27.113 INFO: JSON config files are the same 00:09:27.113 + rm /tmp/62.glD /tmp/spdk_tgt_config.json.Hu8 00:09:27.113 + exit 0 00:09:27.113 14:07:32 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:09:27.113 14:07:32 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:27.113 INFO: changing configuration and checking if this can be detected... 00:09:27.113 14:07:32 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:27.113 14:07:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:27.375 14:07:32 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:27.375 14:07:32 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:09:27.375 14:07:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:27.375 + '[' 2 -ne 2 ']' 00:09:27.375 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:09:27.375 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:09:27.375 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:27.375 +++ basename /dev/fd/62 00:09:27.375 ++ mktemp /tmp/62.XXX 00:09:27.375 + tmp_file_1=/tmp/62.cXf 00:09:27.375 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:27.375 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:27.375 + tmp_file_2=/tmp/spdk_tgt_config.json.eg2 00:09:27.375 + ret=0 00:09:27.375 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:27.636 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:09:27.897 + diff -u /tmp/62.cXf /tmp/spdk_tgt_config.json.eg2 00:09:27.897 + ret=1 00:09:27.897 + echo '=== Start of file: /tmp/62.cXf ===' 00:09:27.897 + cat /tmp/62.cXf 00:09:27.897 + echo '=== End of file: /tmp/62.cXf ===' 00:09:27.897 + echo '' 00:09:27.897 + echo '=== Start of file: /tmp/spdk_tgt_config.json.eg2 ===' 00:09:27.897 + cat /tmp/spdk_tgt_config.json.eg2 00:09:27.897 + echo '=== End of file: /tmp/spdk_tgt_config.json.eg2 ===' 00:09:27.897 + echo '' 00:09:27.897 + rm /tmp/62.cXf /tmp/spdk_tgt_config.json.eg2 00:09:27.897 + exit 1 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:09:27.897 INFO: configuration change detected. 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@324 -- # [[ -n 3197873 ]] 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@200 -- # uname -s 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:27.897 14:07:32 json_config -- json_config/json_config.sh@330 -- # killprocess 3197873 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@954 -- # '[' -z 3197873 ']' 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@958 -- # kill -0 3197873 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@959 -- # uname 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.897 14:07:32 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3197873 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3197873' 00:09:27.897 killing process with pid 3197873 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@973 -- # kill 3197873 00:09:27.897 14:07:32 json_config -- common/autotest_common.sh@978 -- # wait 3197873 00:09:28.158 14:07:33 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:09:28.158 14:07:33 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:09:28.158 14:07:33 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.158 14:07:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 14:07:33 json_config -- json_config/json_config.sh@335 -- # return 0 00:09:28.158 14:07:33 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:09:28.158 INFO: Success 00:09:28.158 00:09:28.158 real 0m7.443s 00:09:28.158 user 0m8.894s 00:09:28.158 sys 0m2.064s 00:09:28.158 14:07:33 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.158 14:07:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 ************************************ 00:09:28.158 END TEST json_config 00:09:28.158 ************************************ 00:09:28.158 14:07:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:28.158 14:07:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.158 14:07:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.158 14:07:33 -- common/autotest_common.sh@10 -- # set +x 00:09:28.158 ************************************ 00:09:28.158 START TEST json_config_extra_key 00:09:28.158 ************************************ 00:09:28.158 14:07:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.421 14:07:33 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.421 14:07:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.421 --rc genhtml_branch_coverage=1 00:09:28.421 --rc genhtml_function_coverage=1 00:09:28.421 --rc genhtml_legend=1 00:09:28.421 --rc geninfo_all_blocks=1 00:09:28.421 --rc geninfo_unexecuted_blocks=1 00:09:28.421 00:09:28.421 ' 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.421 --rc genhtml_branch_coverage=1 00:09:28.421 --rc genhtml_function_coverage=1 00:09:28.421 --rc genhtml_legend=1 00:09:28.421 --rc geninfo_all_blocks=1 00:09:28.421 --rc geninfo_unexecuted_blocks=1 00:09:28.421 00:09:28.421 ' 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.421 --rc genhtml_branch_coverage=1 00:09:28.421 --rc genhtml_function_coverage=1 00:09:28.421 --rc genhtml_legend=1 00:09:28.421 --rc geninfo_all_blocks=1 00:09:28.421 --rc geninfo_unexecuted_blocks=1 00:09:28.421 00:09:28.421 ' 00:09:28.421 14:07:33 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.422 --rc genhtml_branch_coverage=1 00:09:28.422 --rc genhtml_function_coverage=1 00:09:28.422 --rc genhtml_legend=1 00:09:28.422 --rc geninfo_all_blocks=1 00:09:28.422 --rc geninfo_unexecuted_blocks=1 00:09:28.422 00:09:28.422 ' 00:09:28.422 14:07:33 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.422 14:07:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.422 14:07:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.422 14:07:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.422 14:07:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.422 14:07:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.422 14:07:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.422 14:07:33 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.422 14:07:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:28.422 14:07:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:28.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:28.422 14:07:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:28.422 INFO: launching applications... 
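[Editor's note] The `declare -A` initializations just traced are how json_config/common.sh tracks per-app state; json_config_test_start_app then expands them into the launch command seen below. A sketch of that pattern, with `$rootdir` standing in for the spdk checkout in this workspace:

```bash
# Sketch of the per-app bookkeeping set up above ($rootdir assumed).
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

app=target
# app_params expands unquoted on purpose so '-m 0x1 -s 1024' word-splits.
"$rootdir/build/bin/spdk_tgt" ${app_params[$app]} \
    -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
app_pid[$app]=$!    # the shutdown loop later polls and kills this PID
```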
00:09:28.422 14:07:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3198573 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:28.422 Waiting for target to run... 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3198573 /var/tmp/spdk_tgt.sock 00:09:28.422 14:07:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3198573 ']' 00:09:28.422 14:07:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:28.422 14:07:33 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:09:28.422 14:07:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.422 14:07:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:28.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:28.422 14:07:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.422 14:07:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:28.684 [2024-11-25 14:07:33.519692] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:09:28.684 [2024-11-25 14:07:33.519767] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198573 ] 00:09:28.945 [2024-11-25 14:07:33.848618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.945 [2024-11-25 14:07:33.880234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.515 14:07:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.516 14:07:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:29.516 00:09:29.516 14:07:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:09:29.516 INFO: shutting down applications... 
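[Editor's note] `waitforlisten`, used above for PID 3198573 with max_retries=100, blocks until the RPC socket answers. The sketch below shows the pattern only, not the exact autotest helper; `spdk_get_version` is used as the probe here because it appears in the rpc_get_methods dump later in this log:

```bash
# Sketch of the waitforlisten pattern (not the exact autotest helper).
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock} i
    local rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do                  # max_retries=100, as logged
        kill -0 "$pid" 2>/dev/null || return 1         # target exited early
        if "$rpc_py" -s "$rpc_addr" -t 1 spdk_get_version &>/dev/null; then
            return 0                                   # socket is up and answering
        fi
        sleep 0.5
    done
    return 1
}
```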
00:09:29.516 14:07:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3198573 ]] 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3198573 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3198573 00:09:29.516 14:07:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:29.776 14:07:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:29.776 14:07:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:29.776 14:07:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3198573 00:09:29.776 14:07:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:29.777 14:07:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:29.777 14:07:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:29.777 14:07:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:29.777 SPDK target shutdown done 00:09:29.777 14:07:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:29.777 Success 00:09:29.777 00:09:29.777 real 0m1.574s 00:09:29.777 user 0m1.131s 00:09:29.777 sys 0m0.462s 00:09:29.777 14:07:34 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.777 14:07:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:29.777 ************************************ 00:09:29.777 END TEST json_config_extra_key 00:09:29.777 ************************************ 00:09:29.777 14:07:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:29.777 14:07:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.777 14:07:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.777 14:07:34 -- common/autotest_common.sh@10 -- # set +x 00:09:30.038 ************************************ 00:09:30.038 START TEST alias_rpc 00:09:30.038 ************************************ 00:09:30.038 14:07:34 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:30.038 * Looking for test storage... 
00:09:30.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:09:30.038 14:07:34 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:30.038 14:07:34 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:30.038 14:07:34 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:30.038 14:07:35 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@345 -- # : 1 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.038 14:07:35 alias_rpc -- scripts/common.sh@368 -- # return 0 00:09:30.038 14:07:35 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.038 14:07:35 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:30.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.038 --rc genhtml_branch_coverage=1 00:09:30.038 --rc genhtml_function_coverage=1 00:09:30.038 --rc genhtml_legend=1 00:09:30.038 --rc geninfo_all_blocks=1 00:09:30.038 --rc geninfo_unexecuted_blocks=1 00:09:30.038 00:09:30.038 ' 00:09:30.038 14:07:35 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:30.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.039 --rc genhtml_branch_coverage=1 00:09:30.039 --rc genhtml_function_coverage=1 00:09:30.039 --rc genhtml_legend=1 00:09:30.039 --rc geninfo_all_blocks=1 00:09:30.039 --rc geninfo_unexecuted_blocks=1 00:09:30.039 00:09:30.039 ' 00:09:30.039 14:07:35 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:30.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.039 --rc genhtml_branch_coverage=1 00:09:30.039 --rc genhtml_function_coverage=1 00:09:30.039 --rc genhtml_legend=1 00:09:30.039 --rc geninfo_all_blocks=1 00:09:30.039 --rc geninfo_unexecuted_blocks=1 00:09:30.039 00:09:30.039 ' 00:09:30.039 14:07:35 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:30.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.039 --rc genhtml_branch_coverage=1 00:09:30.039 --rc genhtml_function_coverage=1 00:09:30.039 --rc genhtml_legend=1 00:09:30.039 --rc geninfo_all_blocks=1 00:09:30.039 --rc geninfo_unexecuted_blocks=1 00:09:30.039 00:09:30.039 ' 00:09:30.039 14:07:35 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:30.039 14:07:35 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3198928 00:09:30.039 14:07:35 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3198928 00:09:30.039 14:07:35 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:30.039 14:07:35 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3198928 ']' 00:09:30.039 14:07:35 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.039 14:07:35 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.039 14:07:35 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.039 14:07:35 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.039 14:07:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.300 [2024-11-25 14:07:35.160075] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:09:30.300 [2024-11-25 14:07:35.160149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3198928 ] 00:09:30.300 [2024-11-25 14:07:35.249337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.300 [2024-11-25 14:07:35.283555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.872 14:07:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.872 14:07:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:30.872 14:07:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:31.132 14:07:36 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3198928 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3198928 ']' 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3198928 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198928 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198928' 00:09:31.132 killing process with pid 3198928 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@973 -- # kill 3198928 00:09:31.132 14:07:36 alias_rpc -- common/autotest_common.sh@978 -- # wait 3198928 00:09:31.392 00:09:31.392 real 0m1.488s 00:09:31.392 user 0m1.627s 00:09:31.392 sys 0m0.422s 00:09:31.392 14:07:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.392 14:07:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.392 ************************************ 00:09:31.392 END TEST alias_rpc 00:09:31.392 ************************************ 00:09:31.392 14:07:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:31.392 14:07:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:31.392 14:07:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.392 14:07:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.392 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:09:31.392 ************************************ 00:09:31.392 START TEST spdkcli_tcp 00:09:31.392 ************************************ 00:09:31.392 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:31.654 * Looking for test storage... 
00:09:31.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.654 14:07:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.654 --rc genhtml_branch_coverage=1 00:09:31.654 --rc genhtml_function_coverage=1 00:09:31.654 --rc genhtml_legend=1 00:09:31.654 --rc geninfo_all_blocks=1 00:09:31.654 --rc geninfo_unexecuted_blocks=1 00:09:31.654 00:09:31.654 ' 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.654 --rc genhtml_branch_coverage=1 00:09:31.654 --rc genhtml_function_coverage=1 00:09:31.654 --rc genhtml_legend=1 00:09:31.654 --rc geninfo_all_blocks=1 00:09:31.654 --rc 
geninfo_unexecuted_blocks=1 00:09:31.654 00:09:31.654 ' 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.654 --rc genhtml_branch_coverage=1 00:09:31.654 --rc genhtml_function_coverage=1 00:09:31.654 --rc genhtml_legend=1 00:09:31.654 --rc geninfo_all_blocks=1 00:09:31.654 --rc geninfo_unexecuted_blocks=1 00:09:31.654 00:09:31.654 ' 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.654 --rc genhtml_branch_coverage=1 00:09:31.654 --rc genhtml_function_coverage=1 00:09:31.654 --rc genhtml_legend=1 00:09:31.654 --rc geninfo_all_blocks=1 00:09:31.654 --rc geninfo_unexecuted_blocks=1 00:09:31.654 00:09:31.654 ' 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3199257 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3199257 00:09:31.654 14:07:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:31.654 14:07:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3199257 ']' 00:09:31.655 14:07:36 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.655 14:07:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.655 14:07:36 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.655 14:07:36 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.655 14:07:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:31.655 [2024-11-25 14:07:36.727658] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:09:31.655 [2024-11-25 14:07:36.727731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199257 ] 00:09:31.946 [2024-11-25 14:07:36.814681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:31.946 [2024-11-25 14:07:36.850844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.946 [2024-11-25 14:07:36.850846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.519 14:07:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.519 14:07:37 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:32.519 14:07:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3199472 00:09:32.519 14:07:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:32.519 14:07:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:32.779 [ 00:09:32.779 "bdev_malloc_delete", 00:09:32.779 "bdev_malloc_create", 00:09:32.779 "bdev_null_resize", 00:09:32.779 "bdev_null_delete", 00:09:32.779 "bdev_null_create", 00:09:32.779 "bdev_nvme_cuse_unregister", 00:09:32.779 "bdev_nvme_cuse_register", 00:09:32.779 "bdev_opal_new_user", 00:09:32.779 "bdev_opal_set_lock_state", 00:09:32.779 "bdev_opal_delete", 00:09:32.779 "bdev_opal_get_info", 00:09:32.779 "bdev_opal_create", 00:09:32.779 "bdev_nvme_opal_revert", 00:09:32.779 "bdev_nvme_opal_init", 00:09:32.779 "bdev_nvme_send_cmd", 00:09:32.779 "bdev_nvme_set_keys", 00:09:32.779 "bdev_nvme_get_path_iostat", 00:09:32.779 "bdev_nvme_get_mdns_discovery_info", 00:09:32.779 "bdev_nvme_stop_mdns_discovery", 00:09:32.779 "bdev_nvme_start_mdns_discovery", 00:09:32.779 "bdev_nvme_set_multipath_policy", 00:09:32.779 "bdev_nvme_set_preferred_path", 00:09:32.779 "bdev_nvme_get_io_paths", 00:09:32.779 "bdev_nvme_remove_error_injection", 00:09:32.779 "bdev_nvme_add_error_injection", 00:09:32.779 "bdev_nvme_get_discovery_info", 00:09:32.779 "bdev_nvme_stop_discovery", 00:09:32.779 "bdev_nvme_start_discovery", 00:09:32.779 "bdev_nvme_get_controller_health_info", 00:09:32.779 "bdev_nvme_disable_controller", 00:09:32.779 "bdev_nvme_enable_controller", 00:09:32.779 "bdev_nvme_reset_controller", 00:09:32.779 "bdev_nvme_get_transport_statistics", 00:09:32.779 "bdev_nvme_apply_firmware", 00:09:32.779 "bdev_nvme_detach_controller", 00:09:32.779 "bdev_nvme_get_controllers", 00:09:32.779 "bdev_nvme_attach_controller", 00:09:32.779 "bdev_nvme_set_hotplug", 00:09:32.779 "bdev_nvme_set_options", 00:09:32.779 "bdev_passthru_delete", 00:09:32.779 "bdev_passthru_create", 00:09:32.779 "bdev_lvol_set_parent_bdev", 00:09:32.779 "bdev_lvol_set_parent", 00:09:32.779 "bdev_lvol_check_shallow_copy", 00:09:32.779 "bdev_lvol_start_shallow_copy", 00:09:32.779 "bdev_lvol_grow_lvstore", 00:09:32.779 "bdev_lvol_get_lvols", 00:09:32.779 "bdev_lvol_get_lvstores", 00:09:32.779 "bdev_lvol_delete", 00:09:32.779 "bdev_lvol_set_read_only", 00:09:32.779 "bdev_lvol_resize", 00:09:32.779 "bdev_lvol_decouple_parent", 00:09:32.779 "bdev_lvol_inflate", 00:09:32.779 "bdev_lvol_rename", 00:09:32.779 "bdev_lvol_clone_bdev", 00:09:32.779 "bdev_lvol_clone", 00:09:32.779 "bdev_lvol_snapshot", 00:09:32.779 "bdev_lvol_create", 00:09:32.779 "bdev_lvol_delete_lvstore", 00:09:32.780 "bdev_lvol_rename_lvstore", 
00:09:32.780 "bdev_lvol_create_lvstore", 00:09:32.780 "bdev_raid_set_options", 00:09:32.780 "bdev_raid_remove_base_bdev", 00:09:32.780 "bdev_raid_add_base_bdev", 00:09:32.780 "bdev_raid_delete", 00:09:32.780 "bdev_raid_create", 00:09:32.780 "bdev_raid_get_bdevs", 00:09:32.780 "bdev_error_inject_error", 00:09:32.780 "bdev_error_delete", 00:09:32.780 "bdev_error_create", 00:09:32.780 "bdev_split_delete", 00:09:32.780 "bdev_split_create", 00:09:32.780 "bdev_delay_delete", 00:09:32.780 "bdev_delay_create", 00:09:32.780 "bdev_delay_update_latency", 00:09:32.780 "bdev_zone_block_delete", 00:09:32.780 "bdev_zone_block_create", 00:09:32.780 "blobfs_create", 00:09:32.780 "blobfs_detect", 00:09:32.780 "blobfs_set_cache_size", 00:09:32.780 "bdev_aio_delete", 00:09:32.780 "bdev_aio_rescan", 00:09:32.780 "bdev_aio_create", 00:09:32.780 "bdev_ftl_set_property", 00:09:32.780 "bdev_ftl_get_properties", 00:09:32.780 "bdev_ftl_get_stats", 00:09:32.780 "bdev_ftl_unmap", 00:09:32.780 "bdev_ftl_unload", 00:09:32.780 "bdev_ftl_delete", 00:09:32.780 "bdev_ftl_load", 00:09:32.780 "bdev_ftl_create", 00:09:32.780 "bdev_virtio_attach_controller", 00:09:32.780 "bdev_virtio_scsi_get_devices", 00:09:32.780 "bdev_virtio_detach_controller", 00:09:32.780 "bdev_virtio_blk_set_hotplug", 00:09:32.780 "bdev_iscsi_delete", 00:09:32.780 "bdev_iscsi_create", 00:09:32.780 "bdev_iscsi_set_options", 00:09:32.780 "accel_error_inject_error", 00:09:32.780 "ioat_scan_accel_module", 00:09:32.780 "dsa_scan_accel_module", 00:09:32.780 "iaa_scan_accel_module", 00:09:32.780 "vfu_virtio_create_fs_endpoint", 00:09:32.780 "vfu_virtio_create_scsi_endpoint", 00:09:32.780 "vfu_virtio_scsi_remove_target", 00:09:32.780 "vfu_virtio_scsi_add_target", 00:09:32.780 "vfu_virtio_create_blk_endpoint", 00:09:32.780 "vfu_virtio_delete_endpoint", 00:09:32.780 "keyring_file_remove_key", 00:09:32.780 "keyring_file_add_key", 00:09:32.780 "keyring_linux_set_options", 00:09:32.780 "fsdev_aio_delete", 00:09:32.780 "fsdev_aio_create", 00:09:32.780 "iscsi_get_histogram", 00:09:32.780 "iscsi_enable_histogram", 00:09:32.780 "iscsi_set_options", 00:09:32.780 "iscsi_get_auth_groups", 00:09:32.780 "iscsi_auth_group_remove_secret", 00:09:32.780 "iscsi_auth_group_add_secret", 00:09:32.780 "iscsi_delete_auth_group", 00:09:32.780 "iscsi_create_auth_group", 00:09:32.780 "iscsi_set_discovery_auth", 00:09:32.780 "iscsi_get_options", 00:09:32.780 "iscsi_target_node_request_logout", 00:09:32.780 "iscsi_target_node_set_redirect", 00:09:32.780 "iscsi_target_node_set_auth", 00:09:32.780 "iscsi_target_node_add_lun", 00:09:32.780 "iscsi_get_stats", 00:09:32.780 "iscsi_get_connections", 00:09:32.780 "iscsi_portal_group_set_auth", 00:09:32.780 "iscsi_start_portal_group", 00:09:32.780 "iscsi_delete_portal_group", 00:09:32.780 "iscsi_create_portal_group", 00:09:32.780 "iscsi_get_portal_groups", 00:09:32.780 "iscsi_delete_target_node", 00:09:32.780 "iscsi_target_node_remove_pg_ig_maps", 00:09:32.780 "iscsi_target_node_add_pg_ig_maps", 00:09:32.780 "iscsi_create_target_node", 00:09:32.780 "iscsi_get_target_nodes", 00:09:32.780 "iscsi_delete_initiator_group", 00:09:32.780 "iscsi_initiator_group_remove_initiators", 00:09:32.780 "iscsi_initiator_group_add_initiators", 00:09:32.780 "iscsi_create_initiator_group", 00:09:32.780 "iscsi_get_initiator_groups", 00:09:32.780 "nvmf_set_crdt", 00:09:32.780 "nvmf_set_config", 00:09:32.780 "nvmf_set_max_subsystems", 00:09:32.780 "nvmf_stop_mdns_prr", 00:09:32.780 "nvmf_publish_mdns_prr", 00:09:32.780 "nvmf_subsystem_get_listeners", 00:09:32.780 
"nvmf_subsystem_get_qpairs", 00:09:32.780 "nvmf_subsystem_get_controllers", 00:09:32.780 "nvmf_get_stats", 00:09:32.780 "nvmf_get_transports", 00:09:32.780 "nvmf_create_transport", 00:09:32.780 "nvmf_get_targets", 00:09:32.780 "nvmf_delete_target", 00:09:32.780 "nvmf_create_target", 00:09:32.780 "nvmf_subsystem_allow_any_host", 00:09:32.780 "nvmf_subsystem_set_keys", 00:09:32.780 "nvmf_subsystem_remove_host", 00:09:32.780 "nvmf_subsystem_add_host", 00:09:32.780 "nvmf_ns_remove_host", 00:09:32.780 "nvmf_ns_add_host", 00:09:32.780 "nvmf_subsystem_remove_ns", 00:09:32.780 "nvmf_subsystem_set_ns_ana_group", 00:09:32.780 "nvmf_subsystem_add_ns", 00:09:32.780 "nvmf_subsystem_listener_set_ana_state", 00:09:32.780 "nvmf_discovery_get_referrals", 00:09:32.780 "nvmf_discovery_remove_referral", 00:09:32.780 "nvmf_discovery_add_referral", 00:09:32.780 "nvmf_subsystem_remove_listener", 00:09:32.780 "nvmf_subsystem_add_listener", 00:09:32.780 "nvmf_delete_subsystem", 00:09:32.780 "nvmf_create_subsystem", 00:09:32.780 "nvmf_get_subsystems", 00:09:32.780 "env_dpdk_get_mem_stats", 00:09:32.780 "nbd_get_disks", 00:09:32.780 "nbd_stop_disk", 00:09:32.780 "nbd_start_disk", 00:09:32.780 "ublk_recover_disk", 00:09:32.780 "ublk_get_disks", 00:09:32.780 "ublk_stop_disk", 00:09:32.780 "ublk_start_disk", 00:09:32.780 "ublk_destroy_target", 00:09:32.780 "ublk_create_target", 00:09:32.780 "virtio_blk_create_transport", 00:09:32.780 "virtio_blk_get_transports", 00:09:32.780 "vhost_controller_set_coalescing", 00:09:32.780 "vhost_get_controllers", 00:09:32.780 "vhost_delete_controller", 00:09:32.780 "vhost_create_blk_controller", 00:09:32.780 "vhost_scsi_controller_remove_target", 00:09:32.780 "vhost_scsi_controller_add_target", 00:09:32.780 "vhost_start_scsi_controller", 00:09:32.780 "vhost_create_scsi_controller", 00:09:32.780 "thread_set_cpumask", 00:09:32.780 "scheduler_set_options", 00:09:32.780 "framework_get_governor", 00:09:32.780 "framework_get_scheduler", 00:09:32.780 "framework_set_scheduler", 00:09:32.780 "framework_get_reactors", 00:09:32.780 "thread_get_io_channels", 00:09:32.780 "thread_get_pollers", 00:09:32.780 "thread_get_stats", 00:09:32.780 "framework_monitor_context_switch", 00:09:32.780 "spdk_kill_instance", 00:09:32.780 "log_enable_timestamps", 00:09:32.780 "log_get_flags", 00:09:32.780 "log_clear_flag", 00:09:32.780 "log_set_flag", 00:09:32.780 "log_get_level", 00:09:32.780 "log_set_level", 00:09:32.780 "log_get_print_level", 00:09:32.780 "log_set_print_level", 00:09:32.780 "framework_enable_cpumask_locks", 00:09:32.780 "framework_disable_cpumask_locks", 00:09:32.780 "framework_wait_init", 00:09:32.780 "framework_start_init", 00:09:32.780 "scsi_get_devices", 00:09:32.780 "bdev_get_histogram", 00:09:32.780 "bdev_enable_histogram", 00:09:32.780 "bdev_set_qos_limit", 00:09:32.780 "bdev_set_qd_sampling_period", 00:09:32.780 "bdev_get_bdevs", 00:09:32.780 "bdev_reset_iostat", 00:09:32.780 "bdev_get_iostat", 00:09:32.780 "bdev_examine", 00:09:32.780 "bdev_wait_for_examine", 00:09:32.780 "bdev_set_options", 00:09:32.780 "accel_get_stats", 00:09:32.780 "accel_set_options", 00:09:32.780 "accel_set_driver", 00:09:32.780 "accel_crypto_key_destroy", 00:09:32.780 "accel_crypto_keys_get", 00:09:32.780 "accel_crypto_key_create", 00:09:32.780 "accel_assign_opc", 00:09:32.780 "accel_get_module_info", 00:09:32.781 "accel_get_opc_assignments", 00:09:32.781 "vmd_rescan", 00:09:32.781 "vmd_remove_device", 00:09:32.781 "vmd_enable", 00:09:32.781 "sock_get_default_impl", 00:09:32.781 "sock_set_default_impl", 
00:09:32.781 "sock_impl_set_options", 00:09:32.781 "sock_impl_get_options", 00:09:32.781 "iobuf_get_stats", 00:09:32.781 "iobuf_set_options", 00:09:32.781 "keyring_get_keys", 00:09:32.781 "vfu_tgt_set_base_path", 00:09:32.781 "framework_get_pci_devices", 00:09:32.781 "framework_get_config", 00:09:32.781 "framework_get_subsystems", 00:09:32.781 "fsdev_set_opts", 00:09:32.781 "fsdev_get_opts", 00:09:32.781 "trace_get_info", 00:09:32.781 "trace_get_tpoint_group_mask", 00:09:32.781 "trace_disable_tpoint_group", 00:09:32.781 "trace_enable_tpoint_group", 00:09:32.781 "trace_clear_tpoint_mask", 00:09:32.781 "trace_set_tpoint_mask", 00:09:32.781 "notify_get_notifications", 00:09:32.781 "notify_get_types", 00:09:32.781 "spdk_get_version", 00:09:32.781 "rpc_get_methods" 00:09:32.781 ] 00:09:32.781 14:07:37 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:32.781 14:07:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:32.781 14:07:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3199257 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3199257 ']' 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3199257 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199257 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199257' 00:09:32.781 killing process with pid 3199257 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3199257 00:09:32.781 14:07:37 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3199257 00:09:33.042 00:09:33.042 real 0m1.534s 00:09:33.042 user 0m2.773s 00:09:33.042 sys 0m0.484s 00:09:33.042 14:07:37 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.042 14:07:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.042 ************************************ 00:09:33.042 END TEST spdkcli_tcp 00:09:33.042 ************************************ 00:09:33.042 14:07:38 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.042 14:07:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.042 14:07:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.042 14:07:38 -- common/autotest_common.sh@10 -- # set +x 00:09:33.042 ************************************ 00:09:33.042 START TEST dpdk_mem_utility 00:09:33.042 ************************************ 00:09:33.042 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.303 * Looking for test storage... 
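
The spdkcli_tcp test that finishes above drives SPDK's JSON-RPC interface over TCP instead of the default UNIX domain socket: tcp.sh@30 bridges TCP port 9998 to /var/tmp/spdk.sock with socat, and tcp.sh@33 then calls rpc_get_methods through that bridge. A minimal standalone sketch of the same flow, reusing the exact flags logged above (the fork,reuseaddr socat options are an added assumption so the listener can serve more than one connection):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # bridge TCP port 9998 to the target's JSON-RPC UNIX socket
    socat TCP-LISTEN:9998,fork,reuseaddr UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # query the method list over TCP: -r retries, -t timeout, -s/-p server address and port
    "$SPDK/scripts/rpc.py" -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"

The method list printed above is the full RPC surface registered by this spdk_tgt build, so the same bridge works for any of those calls.
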
00:09:33.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.303 14:07:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:33.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.303 --rc genhtml_branch_coverage=1 00:09:33.303 --rc genhtml_function_coverage=1 00:09:33.303 --rc genhtml_legend=1 00:09:33.303 --rc geninfo_all_blocks=1 00:09:33.303 --rc geninfo_unexecuted_blocks=1 00:09:33.303 00:09:33.303 ' 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:33.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.303 --rc 
genhtml_branch_coverage=1 00:09:33.303 --rc genhtml_function_coverage=1 00:09:33.303 --rc genhtml_legend=1 00:09:33.303 --rc geninfo_all_blocks=1 00:09:33.303 --rc geninfo_unexecuted_blocks=1 00:09:33.303 00:09:33.303 ' 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:33.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.303 --rc genhtml_branch_coverage=1 00:09:33.303 --rc genhtml_function_coverage=1 00:09:33.303 --rc genhtml_legend=1 00:09:33.303 --rc geninfo_all_blocks=1 00:09:33.303 --rc geninfo_unexecuted_blocks=1 00:09:33.303 00:09:33.303 ' 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:33.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.303 --rc genhtml_branch_coverage=1 00:09:33.303 --rc genhtml_function_coverage=1 00:09:33.303 --rc genhtml_legend=1 00:09:33.303 --rc geninfo_all_blocks=1 00:09:33.303 --rc geninfo_unexecuted_blocks=1 00:09:33.303 00:09:33.303 ' 00:09:33.303 14:07:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:33.303 14:07:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3199618 00:09:33.303 14:07:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3199618 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3199618 ']' 00:09:33.303 14:07:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:33.303 14:07:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:33.303 [2024-11-25 14:07:38.326611] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:09:33.303 [2024-11-25 14:07:38.326687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199618 ] 00:09:33.564 [2024-11-25 14:07:38.417109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.564 [2024-11-25 14:07:38.458656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.135 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.135 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:34.135 14:07:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:34.135 14:07:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:34.135 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.135 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:34.135 { 00:09:34.135 "filename": "/tmp/spdk_mem_dump.txt" 00:09:34.135 } 00:09:34.135 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.135 14:07:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:34.135 DPDK memory size 810.000000 MiB in 1 heap(s) 00:09:34.135 1 heaps totaling size 810.000000 MiB 00:09:34.135 size: 810.000000 MiB heap id: 0 00:09:34.135 end heaps---------- 00:09:34.135 9 mempools totaling size 595.772034 MiB 00:09:34.135 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:34.135 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:34.135 size: 92.545471 MiB name: bdev_io_3199618 00:09:34.135 size: 50.003479 MiB name: msgpool_3199618 00:09:34.135 size: 36.509338 MiB name: fsdev_io_3199618 00:09:34.135 size: 21.763794 MiB name: PDU_Pool 00:09:34.135 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:34.135 size: 4.133484 MiB name: evtpool_3199618 00:09:34.135 size: 0.026123 MiB name: Session_Pool 00:09:34.135 end mempools------- 00:09:34.135 6 memzones totaling size 4.142822 MiB 00:09:34.135 size: 1.000366 MiB name: RG_ring_0_3199618 00:09:34.135 size: 1.000366 MiB name: RG_ring_1_3199618 00:09:34.135 size: 1.000366 MiB name: RG_ring_4_3199618 00:09:34.135 size: 1.000366 MiB name: RG_ring_5_3199618 00:09:34.135 size: 0.125366 MiB name: RG_ring_2_3199618 00:09:34.135 size: 0.015991 MiB name: RG_ring_3_3199618 00:09:34.135 end memzones------- 00:09:34.135 14:07:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:34.395 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:09:34.395 list of free elements. 
size: 10.862488 MiB 00:09:34.395 element at address: 0x200018a00000 with size: 0.999878 MiB 00:09:34.395 element at address: 0x200018c00000 with size: 0.999878 MiB 00:09:34.395 element at address: 0x200000400000 with size: 0.998535 MiB 00:09:34.395 element at address: 0x200031800000 with size: 0.994446 MiB 00:09:34.395 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:34.395 element at address: 0x200012c00000 with size: 0.954285 MiB 00:09:34.395 element at address: 0x200018e00000 with size: 0.936584 MiB 00:09:34.395 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:34.395 element at address: 0x20001a600000 with size: 0.582886 MiB 00:09:34.395 element at address: 0x200000c00000 with size: 0.495422 MiB 00:09:34.395 element at address: 0x20000a600000 with size: 0.490723 MiB 00:09:34.395 element at address: 0x200019000000 with size: 0.485657 MiB 00:09:34.395 element at address: 0x200003e00000 with size: 0.481934 MiB 00:09:34.395 element at address: 0x200027a00000 with size: 0.410034 MiB 00:09:34.395 element at address: 0x200000800000 with size: 0.355042 MiB 00:09:34.395 list of standard malloc elements. size: 199.218628 MiB 00:09:34.395 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:34.395 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:34.395 element at address: 0x200018afff80 with size: 1.000122 MiB 00:09:34.395 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:09:34.395 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:34.395 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:34.395 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:09:34.395 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:34.395 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:09:34.395 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:34.395 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:09:34.395 element at address: 0x20000085b040 with size: 0.000183 MiB 00:09:34.395 element at address: 0x20000085f300 with size: 0.000183 MiB 00:09:34.395 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:34.395 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:34.395 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:34.395 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:34.395 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:34.395 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:34.395 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:34.395 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:34.395 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:09:34.396 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:34.396 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:34.396 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:09:34.396 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:09:34.396 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:09:34.396 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:09:34.396 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:09:34.396 element at address: 0x20001a695380 with size: 0.000183 MiB 00:09:34.396 element at address: 0x20001a695440 with size: 0.000183 MiB 00:09:34.396 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:09:34.396 element at address: 0x200027a69040 with size: 0.000183 MiB 00:09:34.396 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:09:34.396 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:09:34.396 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:09:34.396 list of memzone associated elements. size: 599.918884 MiB 00:09:34.396 element at address: 0x20001a695500 with size: 211.416748 MiB 00:09:34.396 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:34.396 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:09:34.396 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:34.396 element at address: 0x200012df4780 with size: 92.045044 MiB 00:09:34.396 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3199618_0 00:09:34.396 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:34.396 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3199618_0 00:09:34.396 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:34.396 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3199618_0 00:09:34.396 element at address: 0x2000191be940 with size: 20.255554 MiB 00:09:34.396 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:34.396 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:09:34.396 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:34.396 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:34.396 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3199618_0 00:09:34.396 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:34.396 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3199618 00:09:34.396 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:34.396 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3199618 00:09:34.396 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:34.396 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:34.396 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:09:34.396 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:34.396 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:34.396 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:34.396 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:34.396 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:34.396 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:34.396 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3199618 00:09:34.396 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:34.396 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3199618 00:09:34.396 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:09:34.396 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3199618 00:09:34.396 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:09:34.396 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3199618 00:09:34.396 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:34.396 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3199618 00:09:34.396 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:34.396 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3199618 00:09:34.396 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:34.396 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:34.396 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:34.396 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:34.396 element at address: 0x20001907c540 with size: 0.250488 MiB 00:09:34.396 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:34.396 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:34.396 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3199618 00:09:34.396 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:09:34.396 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3199618 00:09:34.396 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:09:34.396 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:34.396 element at address: 0x200027a69100 with size: 0.023743 MiB 00:09:34.396 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:34.396 element at address: 0x20000085b100 with size: 0.016113 MiB 00:09:34.396 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3199618 00:09:34.396 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:09:34.396 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:34.396 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:09:34.396 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3199618 00:09:34.396 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:34.396 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3199618 00:09:34.396 element at address: 0x20000085af00 with size: 0.000305 MiB 00:09:34.396 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3199618 00:09:34.396 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:09:34.396 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:34.396 14:07:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:34.396 14:07:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3199618 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3199618 ']' 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3199618 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199618 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199618' 00:09:34.396 killing process with pid 3199618 00:09:34.396 14:07:39 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3199618 00:09:34.396 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3199618 00:09:34.658 00:09:34.658 real 0m1.429s 00:09:34.658 user 0m1.513s 00:09:34.658 sys 0m0.428s 00:09:34.658 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.658 14:07:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:34.658 ************************************ 00:09:34.658 END TEST dpdk_mem_utility 00:09:34.658 ************************************ 00:09:34.658 14:07:39 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:34.658 14:07:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:34.658 14:07:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.658 14:07:39 -- common/autotest_common.sh@10 -- # set +x 00:09:34.658 ************************************ 00:09:34.658 START TEST event 00:09:34.658 ************************************ 00:09:34.658 14:07:39 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:34.658 * Looking for test storage... 00:09:34.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:34.658 14:07:39 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:34.658 14:07:39 event -- common/autotest_common.sh@1693 -- # lcov --version 00:09:34.658 14:07:39 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:34.658 14:07:39 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:34.658 14:07:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.658 14:07:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.658 14:07:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.658 14:07:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.658 14:07:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.919 14:07:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.919 14:07:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.919 14:07:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.919 14:07:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.919 14:07:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.920 14:07:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.920 14:07:39 event -- scripts/common.sh@344 -- # case "$op" in 00:09:34.920 14:07:39 event -- scripts/common.sh@345 -- # : 1 00:09:34.920 14:07:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.920 14:07:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.920 14:07:39 event -- scripts/common.sh@365 -- # decimal 1 00:09:34.920 14:07:39 event -- scripts/common.sh@353 -- # local d=1 00:09:34.920 14:07:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.920 14:07:39 event -- scripts/common.sh@355 -- # echo 1 00:09:34.920 14:07:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.920 14:07:39 event -- scripts/common.sh@366 -- # decimal 2 00:09:34.920 14:07:39 event -- scripts/common.sh@353 -- # local d=2 00:09:34.920 14:07:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.920 14:07:39 event -- scripts/common.sh@355 -- # echo 2 00:09:34.920 14:07:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.920 14:07:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.920 14:07:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.920 14:07:39 event -- scripts/common.sh@368 -- # return 0 00:09:34.920 14:07:39 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.920 14:07:39 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.920 --rc genhtml_branch_coverage=1 00:09:34.920 --rc genhtml_function_coverage=1 00:09:34.920 --rc genhtml_legend=1 00:09:34.920 --rc geninfo_all_blocks=1 00:09:34.920 --rc geninfo_unexecuted_blocks=1 00:09:34.920 00:09:34.920 ' 00:09:34.920 14:07:39 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.920 --rc genhtml_branch_coverage=1 00:09:34.920 --rc genhtml_function_coverage=1 00:09:34.920 --rc genhtml_legend=1 00:09:34.920 --rc geninfo_all_blocks=1 00:09:34.920 --rc geninfo_unexecuted_blocks=1 00:09:34.920 00:09:34.920 ' 00:09:34.920 14:07:39 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.920 --rc genhtml_branch_coverage=1 00:09:34.920 --rc genhtml_function_coverage=1 00:09:34.920 --rc genhtml_legend=1 00:09:34.920 --rc geninfo_all_blocks=1 00:09:34.920 --rc geninfo_unexecuted_blocks=1 00:09:34.920 00:09:34.920 ' 00:09:34.920 14:07:39 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:34.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.920 --rc genhtml_branch_coverage=1 00:09:34.920 --rc genhtml_function_coverage=1 00:09:34.920 --rc genhtml_legend=1 00:09:34.920 --rc geninfo_all_blocks=1 00:09:34.920 --rc geninfo_unexecuted_blocks=1 00:09:34.920 00:09:34.920 ' 00:09:34.920 14:07:39 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:34.920 14:07:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:34.920 14:07:39 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:34.920 14:07:39 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:34.920 14:07:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.920 14:07:39 event -- common/autotest_common.sh@10 -- # set +x 00:09:34.920 ************************************ 00:09:34.920 START TEST event_perf 00:09:34.920 ************************************ 00:09:34.920 14:07:39 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:09:34.920 Running I/O for 1 seconds...[2024-11-25 14:07:39.829345] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:09:34.920 [2024-11-25 14:07:39.829449] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199960 ] 00:09:34.920 [2024-11-25 14:07:39.916526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.920 [2024-11-25 14:07:39.953193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.920 [2024-11-25 14:07:39.953294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.920 [2024-11-25 14:07:39.953451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.920 [2024-11-25 14:07:39.953453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.311 Running I/O for 1 seconds... 00:09:36.311 lcore 0: 174801 00:09:36.311 lcore 1: 174804 00:09:36.311 lcore 2: 174800 00:09:36.311 lcore 3: 174800 00:09:36.311 done. 00:09:36.311 00:09:36.311 real 0m1.175s 00:09:36.311 user 0m4.089s 00:09:36.311 sys 0m0.082s 00:09:36.311 14:07:40 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.311 14:07:40 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:36.311 ************************************ 00:09:36.311 END TEST event_perf 00:09:36.311 ************************************ 00:09:36.311 14:07:41 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:36.311 14:07:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:36.311 14:07:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.311 14:07:41 event -- common/autotest_common.sh@10 -- # set +x 00:09:36.311 ************************************ 00:09:36.311 START TEST event_reactor 00:09:36.311 ************************************ 00:09:36.311 14:07:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:36.311 [2024-11-25 14:07:41.078186] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:09:36.311 [2024-11-25 14:07:41.078273] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200307 ] 00:09:36.311 [2024-11-25 14:07:41.168635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.311 [2024-11-25 14:07:41.198437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.253 test_start 00:09:37.253 oneshot 00:09:37.253 tick 100 00:09:37.253 tick 100 00:09:37.253 tick 250 00:09:37.253 tick 100 00:09:37.253 tick 100 00:09:37.253 tick 250 00:09:37.253 tick 100 00:09:37.253 tick 500 00:09:37.253 tick 100 00:09:37.253 tick 100 00:09:37.253 tick 250 00:09:37.253 tick 100 00:09:37.253 tick 100 00:09:37.253 test_end 00:09:37.253 00:09:37.253 real 0m1.167s 00:09:37.253 user 0m1.086s 00:09:37.253 sys 0m0.077s 00:09:37.253 14:07:42 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.253 14:07:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:37.253 ************************************ 00:09:37.253 END TEST event_reactor 00:09:37.253 ************************************ 00:09:37.253 14:07:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:37.253 14:07:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.253 14:07:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.253 14:07:42 event -- common/autotest_common.sh@10 -- # set +x 00:09:37.253 ************************************ 00:09:37.253 START TEST event_reactor_perf 00:09:37.253 ************************************ 00:09:37.253 14:07:42 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:37.253 [2024-11-25 14:07:42.324591] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:09:37.253 [2024-11-25 14:07:42.324695] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200663 ] 00:09:37.513 [2024-11-25 14:07:42.411260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.513 [2024-11-25 14:07:42.441519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.455 test_start 00:09:38.455 test_end 00:09:38.455 Performance: 537103 events per second 00:09:38.455 00:09:38.455 real 0m1.163s 00:09:38.455 user 0m1.085s 00:09:38.455 sys 0m0.075s 00:09:38.455 14:07:43 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.455 14:07:43 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:38.455 ************************************ 00:09:38.455 END TEST event_reactor_perf 00:09:38.455 ************************************ 00:09:38.455 14:07:43 event -- event/event.sh@49 -- # uname -s 00:09:38.455 14:07:43 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:38.455 14:07:43 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:38.455 14:07:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.455 14:07:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.455 14:07:43 event -- common/autotest_common.sh@10 -- # set +x 00:09:38.716 ************************************ 00:09:38.716 START TEST event_scheduler 00:09:38.716 ************************************ 00:09:38.716 14:07:43 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:38.716 * Looking for test storage... 
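
The three event unit tests just completed all follow one pattern: start a small SPDK app, let it run for a fixed time, and read the counters it prints (event_perf turned in roughly 175k events per lcore across 4 cores; reactor_perf reported 537103 events per second on one core). A sketch of invoking the same binaries by hand, assuming a built tree at the path this job uses:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # event round-trips on 4 cores (mask 0xF) for 1 second
    "$SPDK/test/event/event_perf/event_perf" -m 0xF -t 1
    # single-core reactor exercising the oneshot and tick events shown above
    "$SPDK/test/event/reactor/reactor" -t 1
    # single-core event throughput measurement for 1 second
    "$SPDK/test/event/reactor_perf/reactor_perf" -t 1
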
00:09:38.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:38.716 14:07:43 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:38.716 14:07:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:09:38.716 14:07:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:38.716 14:07:43 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:38.716 14:07:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.717 14:07:43 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.717 --rc genhtml_branch_coverage=1 00:09:38.717 --rc genhtml_function_coverage=1 00:09:38.717 --rc genhtml_legend=1 00:09:38.717 --rc geninfo_all_blocks=1 00:09:38.717 --rc geninfo_unexecuted_blocks=1 00:09:38.717 00:09:38.717 ' 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.717 --rc genhtml_branch_coverage=1 00:09:38.717 --rc genhtml_function_coverage=1 00:09:38.717 --rc genhtml_legend=1 00:09:38.717 --rc geninfo_all_blocks=1 00:09:38.717 --rc geninfo_unexecuted_blocks=1 00:09:38.717 00:09:38.717 ' 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.717 --rc genhtml_branch_coverage=1 00:09:38.717 --rc genhtml_function_coverage=1 00:09:38.717 --rc genhtml_legend=1 00:09:38.717 --rc geninfo_all_blocks=1 00:09:38.717 --rc geninfo_unexecuted_blocks=1 00:09:38.717 00:09:38.717 ' 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:38.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.717 --rc genhtml_branch_coverage=1 00:09:38.717 --rc genhtml_function_coverage=1 00:09:38.717 --rc genhtml_legend=1 00:09:38.717 --rc geninfo_all_blocks=1 00:09:38.717 --rc geninfo_unexecuted_blocks=1 00:09:38.717 00:09:38.717 ' 00:09:38.717 14:07:43 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:38.717 14:07:43 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3200988 00:09:38.717 14:07:43 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:38.717 14:07:43 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:38.717 14:07:43 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
3200988 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3200988 ']' 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.717 14:07:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:38.717 [2024-11-25 14:07:43.804793] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:09:38.717 [2024-11-25 14:07:43.804862] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200988 ] 00:09:38.978 [2024-11-25 14:07:43.899797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.978 [2024-11-25 14:07:43.955215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.978 [2024-11-25 14:07:43.955315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.978 [2024-11-25 14:07:43.955479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.978 [2024-11-25 14:07:43.955478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.549 14:07:44 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.549 14:07:44 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:39.549 14:07:44 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:39.549 14:07:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.549 14:07:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:39.549 [2024-11-25 14:07:44.629943] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:39.549 [2024-11-25 14:07:44.629962] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:39.549 [2024-11-25 14:07:44.629973] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:39.549 [2024-11-25 14:07:44.629979] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:39.549 [2024-11-25 14:07:44.629985] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:39.549 14:07:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.549 14:07:44 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:39.549 14:07:44 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.549 14:07:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:39.810 [2024-11-25 14:07:44.692603] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:09:39.810 14:07:44 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.810 14:07:44 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:39.810 14:07:44 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.810 14:07:44 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 ************************************ 00:09:39.811 START TEST scheduler_create_thread 00:09:39.811 ************************************ 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 2 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 3 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 4 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 5 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 6 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 7 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 8 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.811 9 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.811 14:07:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.381 10 00:09:40.381 14:07:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.381 14:07:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:40.381 14:07:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.381 14:07:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:41.762 14:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.762 14:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:41.762 14:07:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:41.762 14:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.762 14:07:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:42.700 14:07:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.700 14:07:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:42.700 14:07:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.700 14:07:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.269 14:07:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.269 14:07:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:43.269 14:07:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:43.269 14:07:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.269 14:07:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.209 14:07:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.209 00:09:44.209 real 0m4.225s 00:09:44.209 user 0m0.022s 00:09:44.209 sys 0m0.011s 00:09:44.209 14:07:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.209 14:07:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.209 ************************************ 00:09:44.209 END TEST scheduler_create_thread 00:09:44.210 ************************************ 00:09:44.210 14:07:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:44.210 14:07:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3200988 00:09:44.210 14:07:48 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3200988 ']' 00:09:44.210 14:07:48 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3200988 00:09:44.210 14:07:48 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:44.210 14:07:48 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.210 14:07:49 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3200988 00:09:44.210 14:07:49 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:44.210 14:07:49 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:44.210 14:07:49 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3200988' 00:09:44.210 killing process with pid 3200988 00:09:44.210 14:07:49 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3200988 00:09:44.210 14:07:49 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3200988 00:09:44.470 [2024-11-25 14:07:49.334554] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
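
The scheduler_create_thread run above is driven entirely through an RPC plugin: scheduler_thread_create spawns named threads with a cpumask and an active-busy percentage (the echoed 2 through 10 are the returned thread ids), scheduler_thread_set_active re-weights a running thread, and scheduler_thread_delete retires one. Condensed into a sketch, assuming rpc.py can import scheduler_plugin from the test directory (rpc_cmd above passes --plugin the same way):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    export PYTHONPATH="$SPDK/test/event/scheduler:$PYTHONPATH"  # assumed plugin location
    rpc="$SPDK/scripts/rpc.py --plugin scheduler_plugin"
    # an always-busy thread pinned to core 0; the call returns the new thread id
    tid=$($rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # drop that thread to 50% active
    $rpc scheduler_thread_set_active "$tid" 50
    # create an unpinned thread and delete it again
    tid2=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tid2"
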
00:09:44.470 00:09:44.470 real 0m5.940s 00:09:44.470 user 0m13.842s 00:09:44.470 sys 0m0.446s 00:09:44.470 14:07:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.470 14:07:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:44.470 ************************************ 00:09:44.470 END TEST event_scheduler 00:09:44.470 ************************************ 00:09:44.470 14:07:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:44.470 14:07:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:44.470 14:07:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:44.470 14:07:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.470 14:07:49 event -- common/autotest_common.sh@10 -- # set +x 00:09:44.730 ************************************ 00:09:44.731 START TEST app_repeat 00:09:44.731 ************************************ 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3202122 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3202122' 00:09:44.731 Process app_repeat pid: 3202122 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:44.731 spdk_app_start Round 0 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3202122 /var/tmp/spdk-nbd.sock 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3202122 ']' 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:44.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:44.731 [2024-11-25 14:07:49.611405] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:09:44.731 [2024-11-25 14:07:49.611470] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3202122 ] 00:09:44.731 [2024-11-25 14:07:49.696546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.731 [2024-11-25 14:07:49.729669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.731 [2024-11-25 14:07:49.729669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.731 14:07:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:44.731 14:07:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:44.991 Malloc0 00:09:44.991 14:07:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.251 Malloc1 00:09:45.251 14:07:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.251 14:07:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:45.511 /dev/nbd0 00:09:45.511 14:07:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:45.511 14:07:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.511 1+0 records in 00:09:45.511 1+0 records out 00:09:45.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210714 s, 19.4 MB/s 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.511 14:07:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:45.511 14:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.511 14:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.511 14:07:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:45.772 /dev/nbd1 00:09:45.772 14:07:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:45.772 14:07:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.772 1+0 records in 00:09:45.772 1+0 records out 00:09:45.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272394 s, 15.0 MB/s 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.772 14:07:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:45.772 14:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.773 14:07:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.773 
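Each /dev/nbdX hand-off above runs the waitfornbd helper: wait for the name to appear in /proc/partitions, then prove the device answers I/O by reading one block through it. A condensed sketch, assuming the scratch-file path from this job (retry limits as in common/autotest_common.sh; the sleep between attempts is trimmed out of the xtrace because the first attempt succeeds here):

    # Poll for the kernel device, then smoke-read a single 4 KiB block.
    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest

        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        for ((i = 1; i <= 20; i++)); do
            # O_DIRECT read through the nbd device; a non-empty copy means
            # the SPDK backend is actually serving requests.
            dd if=/dev/$nbd_name of=$tmp bs=4096 count=1 iflag=direct
            size=$(stat -c %s $tmp)
            rm -f $tmp
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1
    }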
14:07:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:45.773 14:07:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.773 14:07:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:45.773 14:07:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:45.773 { 00:09:45.773 "nbd_device": "/dev/nbd0", 00:09:45.773 "bdev_name": "Malloc0" 00:09:45.773 }, 00:09:45.773 { 00:09:45.773 "nbd_device": "/dev/nbd1", 00:09:45.773 "bdev_name": "Malloc1" 00:09:45.773 } 00:09:45.773 ]' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:46.034 { 00:09:46.034 "nbd_device": "/dev/nbd0", 00:09:46.034 "bdev_name": "Malloc0" 00:09:46.034 }, 00:09:46.034 { 00:09:46.034 "nbd_device": "/dev/nbd1", 00:09:46.034 "bdev_name": "Malloc1" 00:09:46.034 } 00:09:46.034 ]' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:46.034 /dev/nbd1' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:46.034 /dev/nbd1' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:46.034 256+0 records in 00:09:46.034 256+0 records out 00:09:46.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120887 s, 86.7 MB/s 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:46.034 256+0 records in 00:09:46.034 256+0 records out 00:09:46.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116496 s, 90.0 MB/s 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:46.034 256+0 records in 00:09:46.034 256+0 records out 00:09:46.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128042 s, 81.9 MB/s 00:09:46.034 14:07:50 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.034 14:07:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.296 14:07:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:46.557 14:07:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:46.557 14:07:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:46.818 14:07:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:46.818 [2024-11-25 14:07:51.889546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.079 [2024-11-25 14:07:51.917818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.079 [2024-11-25 14:07:51.917818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.079 [2024-11-25 14:07:51.946786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:47.079 [2024-11-25 14:07:51.946819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:50.377 14:07:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:50.377 14:07:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:50.377 spdk_app_start Round 1 00:09:50.377 14:07:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3202122 /var/tmp/spdk-nbd.sock 00:09:50.377 14:07:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3202122 ']' 00:09:50.377 14:07:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.377 14:07:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.377 14:07:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:50.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
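The round starting here (and round 2 after it) repeats the data path of round 0, so it is worth spelling out once: two malloc bdevs are exported as nbd devices, 1 MiB of random data is pushed through each, and the same bytes are compared back. A sketch of that nbd_rpc_data_verify cycle, reconstructed from the dd/cmp lines in the trace ($SPDK_DIR stands in for this job's workspace path):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=$SPDK_DIR/test/event/nbdrandtest

    # Export one malloc bdev per nbd device (64 MB total, 4096-byte blocks).
    $rpc bdev_malloc_create 64 4096            # -> Malloc0
    $rpc bdev_malloc_create 64 4096            # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # Write phase: 256 x 4 KiB of random data through each device, O_DIRECT.
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if=$tmp_file of=$dev bs=4096 count=256 oflag=direct
    done

    # Verify phase: the first 1 MiB of each device must match byte-for-byte.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M $tmp_file $dev
    done
    rm $tmp_file

    # Tear down so the next round starts clean.
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1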
00:09:50.377 14:07:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.377 14:07:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:50.377 14:07:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.377 14:07:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:50.377 14:07:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.377 Malloc0 00:09:50.377 14:07:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.377 Malloc1 00:09:50.377 14:07:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.377 14:07:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:50.638 /dev/nbd0 00:09:50.638 14:07:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:50.638 14:07:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:50.638 1+0 records in 00:09:50.638 1+0 records out 00:09:50.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165768 s, 24.7 MB/s 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:50.638 14:07:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:50.638 14:07:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.638 14:07:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.638 14:07:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:50.898 /dev/nbd1 00:09:50.898 14:07:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:50.898 14:07:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:50.898 1+0 records in 00:09:50.898 1+0 records out 00:09:50.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275851 s, 14.8 MB/s 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:50.898 14:07:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:50.898 14:07:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.898 14:07:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.898 14:07:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.898 14:07:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.899 14:07:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:51.161 { 00:09:51.161 "nbd_device": "/dev/nbd0", 00:09:51.161 "bdev_name": "Malloc0" 00:09:51.161 }, 00:09:51.161 { 00:09:51.161 "nbd_device": "/dev/nbd1", 00:09:51.161 "bdev_name": "Malloc1" 00:09:51.161 } 00:09:51.161 ]' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:51.161 { 00:09:51.161 "nbd_device": "/dev/nbd0", 00:09:51.161 "bdev_name": "Malloc0" 00:09:51.161 }, 00:09:51.161 { 00:09:51.161 "nbd_device": "/dev/nbd1", 00:09:51.161 "bdev_name": "Malloc1" 00:09:51.161 } 00:09:51.161 ]' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:51.161 /dev/nbd1' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:51.161 /dev/nbd1' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:51.161 256+0 records in 00:09:51.161 256+0 records out 00:09:51.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127237 s, 82.4 MB/s 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:51.161 256+0 records in 00:09:51.161 256+0 records out 00:09:51.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122294 s, 85.7 MB/s 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:51.161 256+0 records in 00:09:51.161 256+0 records out 00:09:51.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130082 s, 80.6 MB/s 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.161 14:07:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.423 14:07:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:51.683 14:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:51.943 14:07:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:51.943 14:07:56 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:51.943 14:07:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:52.204 [2024-11-25 14:07:57.050936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.204 [2024-11-25 14:07:57.079840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.204 [2024-11-25 14:07:57.079841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.204 [2024-11-25 14:07:57.109564] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:52.204 [2024-11-25 14:07:57.109595] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:55.501 14:07:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:55.501 14:07:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:55.501 spdk_app_start Round 2 00:09:55.501 14:07:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3202122 /var/tmp/spdk-nbd.sock 00:09:55.501 14:07:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3202122 ']' 00:09:55.501 14:07:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:55.501 14:07:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.501 14:07:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:55.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
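Around each round the harness also asserts the attached-device count twice: 2 right after the disks come up, 0 after they are stopped. The count comes from the nbd_get_disks RPC, reduced with jq, roughly as in this sketch (same socket as above; the trailing true keeps grep -c's non-zero exit status from tripping errexit when the list is empty):

    rpc_server=/var/tmp/spdk-nbd.sock
    disks_json=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks)

    # JSON array of {nbd_device, bdev_name} objects; keep just the device paths.
    disks_name=$(echo "$disks_json" | jq -r '.[] | .nbd_device')

    # grep -c prints the number of matching lines (and exits 1 on zero matches).
    count=$(echo "$disks_name" | grep -c /dev/nbd || true)
    if [ "$count" -ne 2 ]; then
        echo "expected 2 nbd devices, found $count"
        exit 1
    fi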
00:09:55.501 14:07:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.501 14:07:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:55.501 14:08:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.501 14:08:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:55.501 14:08:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:55.501 Malloc0 00:09:55.501 14:08:00 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:55.501 Malloc1 00:09:55.501 14:08:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:55.501 14:08:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:55.763 /dev/nbd0 00:09:55.763 14:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:55.763 14:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:55.763 1+0 records in 00:09:55.763 1+0 records out 00:09:55.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272355 s, 15.0 MB/s 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:55.763 14:08:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:55.763 14:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.763 14:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:55.763 14:08:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:56.023 /dev/nbd1 00:09:56.023 14:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:56.023 14:08:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:56.023 1+0 records in 00:09:56.023 1+0 records out 00:09:56.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201379 s, 20.3 MB/s 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:56.023 14:08:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:56.023 14:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.023 14:08:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:56.023 14:08:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:56.023 14:08:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.023 14:08:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:09:56.285 { 00:09:56.285 "nbd_device": "/dev/nbd0", 00:09:56.285 "bdev_name": "Malloc0" 00:09:56.285 }, 00:09:56.285 { 00:09:56.285 "nbd_device": "/dev/nbd1", 00:09:56.285 "bdev_name": "Malloc1" 00:09:56.285 } 00:09:56.285 ]' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:56.285 { 00:09:56.285 "nbd_device": "/dev/nbd0", 00:09:56.285 "bdev_name": "Malloc0" 00:09:56.285 }, 00:09:56.285 { 00:09:56.285 "nbd_device": "/dev/nbd1", 00:09:56.285 "bdev_name": "Malloc1" 00:09:56.285 } 00:09:56.285 ]' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:56.285 /dev/nbd1' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:56.285 /dev/nbd1' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:56.285 256+0 records in 00:09:56.285 256+0 records out 00:09:56.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127476 s, 82.3 MB/s 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:56.285 256+0 records in 00:09:56.285 256+0 records out 00:09:56.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119597 s, 87.7 MB/s 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:56.285 256+0 records in 00:09:56.285 256+0 records out 00:09:56.285 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128374 s, 81.7 MB/s 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.285 14:08:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.546 14:08:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:56.807 14:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:57.069 14:08:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:57.069 14:08:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:57.069 14:08:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:57.329 [2024-11-25 14:08:02.204146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:57.329 [2024-11-25 14:08:02.232543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.329 [2024-11-25 14:08:02.232544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.329 [2024-11-25 14:08:02.261587] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:57.329 [2024-11-25 14:08:02.261619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:00.773 14:08:05 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3202122 /var/tmp/spdk-nbd.sock 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3202122 ']' 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:00.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
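Round 3 never runs an iteration: waitforlisten returns, the loop ends, and killprocess tears the app down. The helper's shape, condensed from the trace (the real version in common/autotest_common.sh also handles sudo-wrapped processes; that branch is reduced to a bail-out here):

    killprocess() {
        local pid=$1

        kill -0 "$pid"                        # fail fast if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1                          # placeholder: the real helper kills via sudo
        fi

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                           # reap it so the exit status is observed
    }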
00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:10:00.773 14:08:05 event.app_repeat -- event/event.sh@39 -- # killprocess 3202122 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3202122 ']' 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3202122 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3202122 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3202122' 00:10:00.773 killing process with pid 3202122 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3202122 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3202122 00:10:00.773 spdk_app_start is called in Round 0. 00:10:00.773 Shutdown signal received, stop current app iteration 00:10:00.773 Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 reinitialization... 00:10:00.773 spdk_app_start is called in Round 1. 00:10:00.773 Shutdown signal received, stop current app iteration 00:10:00.773 Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 reinitialization... 00:10:00.773 spdk_app_start is called in Round 2. 00:10:00.773 Shutdown signal received, stop current app iteration 00:10:00.773 Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 reinitialization... 00:10:00.773 spdk_app_start is called in Round 3. 
00:10:00.773 Shutdown signal received, stop current app iteration 00:10:00.773 14:08:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:00.773 14:08:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:00.773 00:10:00.773 real 0m15.892s 00:10:00.773 user 0m34.965s 00:10:00.773 sys 0m2.294s 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.773 14:08:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:00.773 ************************************ 00:10:00.773 END TEST app_repeat 00:10:00.773 ************************************ 00:10:00.773 14:08:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:00.773 14:08:05 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:00.773 14:08:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.773 14:08:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.773 14:08:05 event -- common/autotest_common.sh@10 -- # set +x 00:10:00.773 ************************************ 00:10:00.773 START TEST cpu_locks 00:10:00.773 ************************************ 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:10:00.773 * Looking for test storage... 00:10:00.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.773 14:08:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.773 --rc genhtml_branch_coverage=1 00:10:00.773 --rc genhtml_function_coverage=1 00:10:00.773 --rc genhtml_legend=1 00:10:00.773 --rc geninfo_all_blocks=1 00:10:00.773 --rc geninfo_unexecuted_blocks=1 00:10:00.773 00:10:00.773 ' 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.773 --rc genhtml_branch_coverage=1 00:10:00.773 --rc genhtml_function_coverage=1 00:10:00.773 --rc genhtml_legend=1 00:10:00.773 --rc geninfo_all_blocks=1 00:10:00.773 --rc geninfo_unexecuted_blocks=1 00:10:00.773 00:10:00.773 ' 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.773 --rc genhtml_branch_coverage=1 00:10:00.773 --rc genhtml_function_coverage=1 00:10:00.773 --rc genhtml_legend=1 00:10:00.773 --rc geninfo_all_blocks=1 00:10:00.773 --rc geninfo_unexecuted_blocks=1 00:10:00.773 00:10:00.773 ' 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:00.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.773 --rc genhtml_branch_coverage=1 00:10:00.773 --rc genhtml_function_coverage=1 00:10:00.773 --rc genhtml_legend=1 00:10:00.773 --rc geninfo_all_blocks=1 00:10:00.773 --rc geninfo_unexecuted_blocks=1 00:10:00.773 00:10:00.773 ' 00:10:00.773 14:08:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:00.773 14:08:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:00.773 14:08:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:00.773 14:08:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.773 14:08:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.773 ************************************ 
00:10:00.773 START TEST default_locks 00:10:00.773 ************************************ 00:10:00.773 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:10:00.773 14:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3205662 00:10:00.773 14:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3205662 00:10:00.773 14:08:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:00.773 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3205662 ']' 00:10:00.773 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.774 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:00.774 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.774 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:00.774 14:08:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:00.774 [2024-11-25 14:08:05.841049] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:00.774 [2024-11-25 14:08:05.841101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205662 ] 00:10:01.034 [2024-11-25 14:08:05.925352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.034 [2024-11-25 14:08:05.963853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.606 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.606 14:08:06 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:10:01.606 14:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3205662 00:10:01.606 14:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3205662 00:10:01.606 14:08:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:02.177 lslocks: write error 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3205662 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3205662 ']' 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3205662 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3205662 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 3205662' 00:10:02.177 killing process with pid 3205662 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3205662 00:10:02.177 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3205662 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3205662 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3205662 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3205662 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3205662 ']' 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3205662) - No such process 00:10:02.438 ERROR: process (pid: 3205662) is no longer running 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:02.438 00:10:02.438 real 0m1.494s 00:10:02.438 user 0m1.608s 00:10:02.438 sys 0m0.535s 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.438 14:08:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.438 ************************************ 00:10:02.438 END TEST default_locks 00:10:02.438 ************************************ 00:10:02.438 14:08:07 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:02.438 14:08:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.438 14:08:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.438 14:08:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:02.438 ************************************ 00:10:02.438 START TEST default_locks_via_rpc 00:10:02.438 ************************************ 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3205957 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3205957 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3205957 ']' 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
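Throughout these lock tests, the locks_exist helper (the lslocks/grep pair traced around each spdk_tgt) verifies that the target is really holding its CPU-core lock: SPDK takes a lock on one /var/tmp/spdk_cpu_lock_NNN file per claimed core (the _{000..002} names show up later in check_remaining_locks). A minimal sketch of the pattern; the stray "lslocks: write error" lines are most likely lslocks hitting a closed pipe after grep -q exits on its first match:

  # minimal sketch of the locks_exist check seen in the trace
  locks_exist() {
      local pid=$1
      # grep -q stops at the first match, so lslocks can get EPIPE on the
      # rest of its output; that is where "lslocks: write error" comes from
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
  locks_exist 3205662    # true while spdk_tgt -m 0x1 holds its core-0 lock file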
00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:02.438 14:08:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.438 [2024-11-25 14:08:07.417130] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:02.438 [2024-11-25 14:08:07.417218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205957 ] 00:10:02.438 [2024-11-25 14:08:07.506210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.698 [2024-11-25 14:08:07.546179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3205957 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3205957 00:10:03.269 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3205957 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3205957 ']' 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3205957 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3205957 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.839 
14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3205957' 00:10:03.839 killing process with pid 3205957 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3205957 00:10:03.839 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3205957 00:10:04.100 00:10:04.100 real 0m1.609s 00:10:04.100 user 0m1.749s 00:10:04.100 sys 0m0.563s 00:10:04.100 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.100 14:08:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:04.100 ************************************ 00:10:04.100 END TEST default_locks_via_rpc 00:10:04.100 ************************************ 00:10:04.100 14:08:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:04.100 14:08:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.100 14:08:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.100 14:08:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:04.100 ************************************ 00:10:04.100 START TEST non_locking_app_on_locked_coremask 00:10:04.100 ************************************ 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3206323 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3206323 /var/tmp/spdk.sock 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3206323 ']' 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.100 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:04.100 [2024-11-25 14:08:09.099773] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
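Every teardown in this suite runs the same killprocess sequence; reconstructed from the xtrace above, and simplified (the real helper has more branches, e.g. for processes started under sudo), it looks roughly like this:

  # simplified reconstruction of killprocess as traced above
  killprocess() {
      local pid=$1
      kill -0 "$pid"                                       # liveness probe; fails if pid is gone
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK target
      fi
      if [ "$process_name" = sudo ]; then return 1; fi     # guard: never SIGTERM a parent sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                  # reap it; the killed app may exit nonzero
  }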
00:10:04.100 [2024-11-25 14:08:09.099828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206323 ] 00:10:04.100 [2024-11-25 14:08:09.187065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.360 [2024-11-25 14:08:09.221578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3206455 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3206455 /var/tmp/spdk2.sock 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3206455 ']' 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:04.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.931 14:08:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:04.931 [2024-11-25 14:08:09.943781] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:04.931 [2024-11-25 14:08:09.943834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206455 ] 00:10:05.191 [2024-11-25 14:08:10.031354] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
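What non_locking_app_on_locked_coremask is exercising here: the first spdk_tgt -m 0x1 claims the core-0 lock, and the second instance can still start on the same core only because it is launched with --disable-cpumask-locks and its own RPC socket (hence the "CPU core locks deactivated." notice just above). In outline, with the binary and socket paths as in the trace:

  # outline of the two-instance scenario (waitforlisten/cleanup omitted)
  build/bin/spdk_tgt -m 0x1 &                              # claims the core-0 lock file
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
      -r /var/tmp/spdk2.sock &                             # skips locking, so it coexists on core 0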
00:10:05.191 [2024-11-25 14:08:10.031382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.191 [2024-11-25 14:08:10.091074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.765 14:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.765 14:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:05.765 14:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3206323 00:10:05.765 14:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3206323 00:10:05.765 14:08:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:06.337 lslocks: write error 00:10:06.337 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3206323 00:10:06.337 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3206323 ']' 00:10:06.337 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3206323 00:10:06.337 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:06.337 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.337 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206323 00:10:06.338 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.338 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.338 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206323' 00:10:06.338 killing process with pid 3206323 00:10:06.338 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3206323 00:10:06.338 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3206323 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3206455 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3206455 ']' 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3206455 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206455 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206455' 00:10:06.910 
killing process with pid 3206455 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3206455 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3206455 00:10:06.910 00:10:06.910 real 0m2.940s 00:10:06.910 user 0m3.269s 00:10:06.910 sys 0m0.923s 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.910 14:08:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:06.910 ************************************ 00:10:06.910 END TEST non_locking_app_on_locked_coremask 00:10:06.910 ************************************ 00:10:07.171 14:08:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:07.171 14:08:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.171 14:08:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.171 14:08:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:07.171 ************************************ 00:10:07.171 START TEST locking_app_on_unlocked_coremask 00:10:07.171 ************************************ 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3206848 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3206848 /var/tmp/spdk.sock 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3206848 ']' 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.171 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.171 [2024-11-25 14:08:12.112451] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:07.171 [2024-11-25 14:08:12.112506] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206848 ] 00:10:07.171 [2024-11-25 14:08:12.198355] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:07.171 [2024-11-25 14:08:12.198381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.171 [2024-11-25 14:08:12.231459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3207163 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3207163 /var/tmp/spdk2.sock 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3207163 ']' 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:08.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.114 14:08:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.114 [2024-11-25 14:08:12.932817] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
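locking_app_on_unlocked_coremask, starting above, is the mirror case: the first target runs with --disable-cpumask-locks and never claims core 0, so the second, lock-enabled target acquires the lock normally; both are then torn down with killprocess. Roughly:

  # outline of the mirrored scenario (pids as in the trace)
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &      # pid 3206848: takes no lock
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # pid 3207163: core-0 lock succeeds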
00:10:08.114 [2024-11-25 14:08:12.932867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207163 ] 00:10:08.114 [2024-11-25 14:08:13.020022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.114 [2024-11-25 14:08:13.078197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.685 14:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.685 14:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:08.685 14:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3207163 00:10:08.686 14:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3207163 00:10:08.686 14:08:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:09.259 lslocks: write error 00:10:09.259 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3206848 00:10:09.259 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3206848 ']' 00:10:09.259 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3206848 00:10:09.259 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:09.259 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.259 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206848 00:10:09.519 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.519 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.519 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206848' 00:10:09.519 killing process with pid 3206848 00:10:09.519 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3206848 00:10:09.519 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3206848 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3207163 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3207163 ']' 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3207163 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207163 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.781 14:08:14 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207163' 00:10:09.781 killing process with pid 3207163 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3207163 00:10:09.781 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3207163 00:10:10.044 00:10:10.044 real 0m2.902s 00:10:10.044 user 0m3.233s 00:10:10.044 sys 0m0.862s 00:10:10.044 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.044 14:08:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.044 ************************************ 00:10:10.044 END TEST locking_app_on_unlocked_coremask 00:10:10.044 ************************************ 00:10:10.044 14:08:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:10.044 14:08:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.044 14:08:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.044 14:08:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:10.044 ************************************ 00:10:10.044 START TEST locking_app_on_locked_coremask 00:10:10.044 ************************************ 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3207541 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3207541 /var/tmp/spdk.sock 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3207541 ']' 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.044 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.044 [2024-11-25 14:08:15.097556] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:10:10.044 [2024-11-25 14:08:15.097608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207541 ] 00:10:10.305 [2024-11-25 14:08:15.183712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.305 [2024-11-25 14:08:15.216298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3207741 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3207741 /var/tmp/spdk2.sock 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3207741 /var/tmp/spdk2.sock 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3207741 /var/tmp/spdk2.sock 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3207741 ']' 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:10.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.877 14:08:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:10.877 [2024-11-25 14:08:15.927834] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:10:10.877 [2024-11-25 14:08:15.927889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207741 ] 00:10:11.138 [2024-11-25 14:08:16.014178] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3207541 has claimed it. 00:10:11.138 [2024-11-25 14:08:16.014212] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:11.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3207741) - No such process 00:10:11.710 ERROR: process (pid: 3207741) is no longer running 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3207541 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3207541 00:10:11.710 14:08:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:11.971 lslocks: write error 00:10:11.971 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3207541 00:10:11.971 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3207541 ']' 00:10:11.971 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3207541 00:10:11.971 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:10:11.971 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.971 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207541 00:10:12.232 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.232 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.232 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207541' 00:10:12.232 killing process with pid 3207541 00:10:12.232 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3207541 00:10:12.232 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3207541 00:10:12.232 00:10:12.232 real 0m2.229s 00:10:12.232 user 0m2.507s 00:10:12.232 sys 0m0.636s 00:10:12.232 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
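The failing waitforlisten above is deliberate: the second lock-enabled target dies with "Cannot create lock on core 0, probably process 3207541 has claimed it", and the NOT wrapper from autotest_common.sh converts that failure into a pass through the es bookkeeping visible in the trace. A simplified model of the idea (the real helper also type-checks its argument via valid_exec_arg first):

  # simplified model of the NOT helper; assumes set -e is in effect
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))            # invert: succeed only when the wrapped command failed
  }
  NOT waitforlisten 3207741 /var/tmp/spdk2.sock    # passes because the target never comes up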
00:10:12.232 14:08:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:12.232 ************************************ 00:10:12.232 END TEST locking_app_on_locked_coremask 00:10:12.232 ************************************ 00:10:12.232 14:08:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:12.232 14:08:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:12.232 14:08:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.232 14:08:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:12.493 ************************************ 00:10:12.493 START TEST locking_overlapped_coremask 00:10:12.493 ************************************ 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3207985 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3207985 /var/tmp/spdk.sock 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3207985 ']' 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.493 14:08:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:12.493 [2024-11-25 14:08:17.392751] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:10:12.493 [2024-11-25 14:08:17.392805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207985 ] 00:10:12.493 [2024-11-25 14:08:17.480246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:12.493 [2024-11-25 14:08:17.514515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.493 [2024-11-25 14:08:17.514668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.493 [2024-11-25 14:08:17.514670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3208251 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3208251 /var/tmp/spdk2.sock 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3208251 /var/tmp/spdk2.sock 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3208251 /var/tmp/spdk2.sock 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3208251 ']' 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:13.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.434 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.434 [2024-11-25 14:08:18.249338] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
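The collision being set up here is plain bitmask intersection: the first target holds -m 0x7 (cores 0-2) while the second requests -m 0x1c (cores 2-4), so the two masks overlap exactly on core 2, which is what the "Cannot create lock on core 2" error below reports. For example:

  echo $((0x7 & 0x1c))    # prints 4, i.e. 1<<2: core 2 is requested by both masks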
00:10:13.435 [2024-11-25 14:08:18.249391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208251 ] 00:10:13.435 [2024-11-25 14:08:18.362624] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3207985 has claimed it. 00:10:13.435 [2024-11-25 14:08:18.362665] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:14.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3208251) - No such process 00:10:14.005 ERROR: process (pid: 3208251) is no longer running 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3207985 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3207985 ']' 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3207985 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207985 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207985' 00:10:14.005 killing process with pid 3207985 00:10:14.005 14:08:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3207985 00:10:14.005 14:08:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3207985 00:10:14.264 00:10:14.264 real 0m1.780s 00:10:14.264 user 0m5.156s 00:10:14.264 sys 0m0.392s 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:14.264 ************************************ 00:10:14.264 END TEST locking_overlapped_coremask 00:10:14.264 ************************************ 00:10:14.264 14:08:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:14.264 14:08:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.264 14:08:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.264 14:08:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:14.264 ************************************ 00:10:14.264 START TEST locking_overlapped_coremask_via_rpc 00:10:14.264 ************************************ 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3208442 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3208442 /var/tmp/spdk.sock 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3208442 ']' 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.264 14:08:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:14.264 [2024-11-25 14:08:19.247558] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:14.264 [2024-11-25 14:08:19.247616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208442 ] 00:10:14.264 [2024-11-25 14:08:19.334270] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:14.264 [2024-11-25 14:08:19.334298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:14.524 [2024-11-25 14:08:19.370432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.524 [2024-11-25 14:08:19.370638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.524 [2024-11-25 14:08:19.370638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3208629 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3208629 /var/tmp/spdk2.sock 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3208629 ']' 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:15.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.107 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.107 [2024-11-25 14:08:20.104195] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:15.107 [2024-11-25 14:08:20.104253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208629 ] 00:10:15.368 [2024-11-25 14:08:20.215617] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:15.368 [2024-11-25 14:08:20.215653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:15.368 [2024-11-25 14:08:20.289052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.368 [2024-11-25 14:08:20.292285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.368 [2024-11-25 14:08:20.292286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:15.941 [2024-11-25 14:08:20.905242] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3208442 has claimed it. 
00:10:15.941 request: 00:10:15.941 { 00:10:15.941 "method": "framework_enable_cpumask_locks", 00:10:15.941 "req_id": 1 00:10:15.941 } 00:10:15.941 Got JSON-RPC error response 00:10:15.941 response: 00:10:15.941 { 00:10:15.941 "code": -32603, 00:10:15.941 "message": "Failed to claim CPU core: 2" 00:10:15.941 } 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3208442 /var/tmp/spdk.sock 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3208442 ']' 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.941 14:08:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3208629 /var/tmp/spdk2.sock 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3208629 ']' 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:16.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
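The -32603 response above is the point of the test: both targets start with --disable-cpumask-locks, the first (pid 3208442, mask 0x7) then claims its cores over RPC, so the second target's claim (mask 0x1c) collides on the shared core 2. The two steps, using the in-tree rpc.py and the same lock-file globs the harness checks afterwards:
  # first target claims cores 0-2 (succeeds)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # second target then fails with "Failed to claim CPU core: 2"
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # one lock file per claimed core remains for the first target
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'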
00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:16.203 00:10:16.203 real 0m2.090s 00:10:16.203 user 0m0.874s 00:10:16.203 sys 0m0.148s 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.203 14:08:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.203 ************************************ 00:10:16.203 END TEST locking_overlapped_coremask_via_rpc 00:10:16.203 ************************************ 00:10:16.464 14:08:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:16.465 14:08:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3208442 ]] 00:10:16.465 14:08:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3208442 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3208442 ']' 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3208442 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208442 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208442' 00:10:16.465 killing process with pid 3208442 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3208442 00:10:16.465 14:08:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3208442 00:10:16.725 14:08:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3208629 ]] 00:10:16.725 14:08:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3208629 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3208629 ']' 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3208629 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208629 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208629' 00:10:16.725 killing process with pid 3208629 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3208629 00:10:16.725 14:08:21 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3208629 00:10:16.986 14:08:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:16.986 14:08:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:16.986 14:08:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3208442 ]] 00:10:16.986 14:08:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3208442 00:10:16.986 14:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3208442 ']' 00:10:16.986 14:08:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3208442 00:10:16.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3208442) - No such process 00:10:16.986 14:08:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3208442 is not found' 00:10:16.986 Process with pid 3208442 is not found 00:10:16.986 14:08:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3208629 ]] 00:10:16.986 14:08:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3208629 00:10:16.986 14:08:21 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3208629 ']' 00:10:16.986 14:08:21 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3208629 00:10:16.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3208629) - No such process 00:10:16.986 14:08:21 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3208629 is not found' 00:10:16.986 Process with pid 3208629 is not found 00:10:16.986 14:08:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:16.986 00:10:16.986 real 0m16.293s 00:10:16.986 user 0m28.417s 00:10:16.986 sys 0m5.006s 00:10:16.987 14:08:21 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.987 14:08:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:16.987 ************************************ 00:10:16.987 END TEST cpu_locks 00:10:16.987 ************************************ 00:10:16.987 00:10:16.987 real 0m42.307s 00:10:16.987 user 1m23.771s 00:10:16.987 sys 0m8.403s 00:10:16.987 14:08:21 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.987 14:08:21 event -- common/autotest_common.sh@10 -- # set +x 00:10:16.987 ************************************ 00:10:16.987 END TEST event 00:10:16.987 ************************************ 00:10:16.987 14:08:21 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:16.987 14:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.987 14:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.987 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:10:16.987 ************************************ 00:10:16.987 START TEST thread 00:10:16.987 ************************************ 00:10:16.987 14:08:21 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:16.987 * Looking for test storage... 00:10:16.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:16.987 14:08:22 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:16.987 14:08:22 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:10:16.987 14:08:22 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.248 14:08:22 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.248 14:08:22 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.248 14:08:22 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.248 14:08:22 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.248 14:08:22 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.248 14:08:22 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.248 14:08:22 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.248 14:08:22 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.248 14:08:22 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.248 14:08:22 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.248 14:08:22 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.248 14:08:22 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:17.248 14:08:22 thread -- scripts/common.sh@345 -- # : 1 00:10:17.248 14:08:22 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.248 14:08:22 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.248 14:08:22 thread -- scripts/common.sh@365 -- # decimal 1 00:10:17.248 14:08:22 thread -- scripts/common.sh@353 -- # local d=1 00:10:17.248 14:08:22 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.248 14:08:22 thread -- scripts/common.sh@355 -- # echo 1 00:10:17.248 14:08:22 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.248 14:08:22 thread -- scripts/common.sh@366 -- # decimal 2 00:10:17.248 14:08:22 thread -- scripts/common.sh@353 -- # local d=2 00:10:17.248 14:08:22 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.248 14:08:22 thread -- scripts/common.sh@355 -- # echo 2 00:10:17.248 14:08:22 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.248 14:08:22 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.248 14:08:22 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.248 14:08:22 thread -- scripts/common.sh@368 -- # return 0 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.248 --rc genhtml_branch_coverage=1 00:10:17.248 --rc genhtml_function_coverage=1 00:10:17.248 --rc genhtml_legend=1 00:10:17.248 --rc geninfo_all_blocks=1 00:10:17.248 --rc geninfo_unexecuted_blocks=1 00:10:17.248 00:10:17.248 ' 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.248 --rc genhtml_branch_coverage=1 00:10:17.248 --rc genhtml_function_coverage=1 00:10:17.248 --rc genhtml_legend=1 00:10:17.248 --rc geninfo_all_blocks=1 00:10:17.248 --rc geninfo_unexecuted_blocks=1 00:10:17.248 
00:10:17.248 ' 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.248 --rc genhtml_branch_coverage=1 00:10:17.248 --rc genhtml_function_coverage=1 00:10:17.248 --rc genhtml_legend=1 00:10:17.248 --rc geninfo_all_blocks=1 00:10:17.248 --rc geninfo_unexecuted_blocks=1 00:10:17.248 00:10:17.248 ' 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.248 --rc genhtml_branch_coverage=1 00:10:17.248 --rc genhtml_function_coverage=1 00:10:17.248 --rc genhtml_legend=1 00:10:17.248 --rc geninfo_all_blocks=1 00:10:17.248 --rc geninfo_unexecuted_blocks=1 00:10:17.248 00:10:17.248 ' 00:10:17.248 14:08:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.248 14:08:22 thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.248 ************************************ 00:10:17.248 START TEST thread_poller_perf 00:10:17.248 ************************************ 00:10:17.248 14:08:22 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:17.248 [2024-11-25 14:08:22.217372] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:17.248 [2024-11-25 14:08:22.217459] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209075 ] 00:10:17.248 [2024-11-25 14:08:22.305284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.248 [2024-11-25 14:08:22.336777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.248 Running 1000 pollers for 1 seconds with 1 microseconds period. 
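As the banner indicates, poller_perf registers the requested number of pollers (-b 1000) with the given period in microseconds (-l 1) and runs them for -t 1 second on a single core; the summary that follows divides total busy cycles by the number of poller invocations. The binary can be rerun standalone with the same flags:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1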
00:10:18.628 [2024-11-25T13:08:23.718Z] ====================================== 00:10:18.628 [2024-11-25T13:08:23.718Z] busy:2408442544 (cyc) 00:10:18.628 [2024-11-25T13:08:23.718Z] total_run_count: 419000 00:10:18.628 [2024-11-25T13:08:23.718Z] tsc_hz: 2400000000 (cyc) 00:10:18.628 [2024-11-25T13:08:23.718Z] ====================================== 00:10:18.628 [2024-11-25T13:08:23.718Z] poller_cost: 5748 (cyc), 2395 (nsec) 00:10:18.628 00:10:18.628 real 0m1.175s 00:10:18.628 user 0m1.093s 00:10:18.628 sys 0m0.078s 00:10:18.628 14:08:23 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.628 14:08:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:18.628 ************************************ 00:10:18.628 END TEST thread_poller_perf 00:10:18.628 ************************************ 00:10:18.628 14:08:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:18.628 14:08:23 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:18.628 14:08:23 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.628 14:08:23 thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.628 ************************************ 00:10:18.628 START TEST thread_poller_perf 00:10:18.628 ************************************ 00:10:18.628 14:08:23 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:18.628 [2024-11-25 14:08:23.469885] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:18.628 [2024-11-25 14:08:23.469989] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209429 ] 00:10:18.628 [2024-11-25 14:08:23.557913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.628 [2024-11-25 14:08:23.591393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.628 Running 1000 pollers for 1 seconds with 0 microseconds period. 
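The poller_cost figures in these summaries are just busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz; the 0-microsecond run below works out the same way (2401685966 / 5545000 ≈ 433 cyc ≈ 180 nsec). A quick check of the first run's numbers:
  awk 'BEGIN {
      busy = 2408442544; runs = 419000; hz = 2400000000
      cyc = busy / runs                      # 5748 cycles per poll
      printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / hz * 1e9
  }'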
00:10:19.576 [2024-11-25T13:08:24.666Z] ====================================== 00:10:19.576 [2024-11-25T13:08:24.666Z] busy:2401685966 (cyc) 00:10:19.576 [2024-11-25T13:08:24.666Z] total_run_count: 5545000 00:10:19.576 [2024-11-25T13:08:24.666Z] tsc_hz: 2400000000 (cyc) 00:10:19.576 [2024-11-25T13:08:24.666Z] ====================================== 00:10:19.576 [2024-11-25T13:08:24.666Z] poller_cost: 433 (cyc), 180 (nsec) 00:10:19.576 00:10:19.576 real 0m1.171s 00:10:19.576 user 0m1.082s 00:10:19.576 sys 0m0.085s 00:10:19.576 14:08:24 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.576 14:08:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:19.576 ************************************ 00:10:19.576 END TEST thread_poller_perf 00:10:19.576 ************************************ 00:10:19.576 14:08:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:19.576 00:10:19.576 real 0m2.706s 00:10:19.576 user 0m2.348s 00:10:19.576 sys 0m0.372s 00:10:19.576 14:08:24 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.576 14:08:24 thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.576 ************************************ 00:10:19.576 END TEST thread 00:10:19.576 ************************************ 00:10:19.837 14:08:24 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:19.837 14:08:24 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:19.837 14:08:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.837 14:08:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.837 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:10:19.837 ************************************ 00:10:19.837 START TEST app_cmdline 00:10:19.837 ************************************ 00:10:19.837 14:08:24 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:19.837 * Looking for test storage... 
00:10:19.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:19.837 14:08:24 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:19.837 14:08:24 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:19.837 14:08:24 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:19.837 14:08:24 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.837 14:08:24 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:20.096 14:08:24 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:20.096 14:08:24 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.096 14:08:24 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:20.096 14:08:24 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.096 14:08:24 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.096 14:08:24 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.096 14:08:24 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:20.096 14:08:24 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.096 14:08:24 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.097 --rc genhtml_branch_coverage=1 00:10:20.097 --rc genhtml_function_coverage=1 00:10:20.097 --rc genhtml_legend=1 00:10:20.097 --rc geninfo_all_blocks=1 00:10:20.097 --rc geninfo_unexecuted_blocks=1 00:10:20.097 00:10:20.097 ' 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.097 --rc genhtml_branch_coverage=1 00:10:20.097 --rc genhtml_function_coverage=1 00:10:20.097 --rc genhtml_legend=1 00:10:20.097 --rc geninfo_all_blocks=1 00:10:20.097 --rc geninfo_unexecuted_blocks=1 
00:10:20.097 00:10:20.097 ' 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.097 --rc genhtml_branch_coverage=1 00:10:20.097 --rc genhtml_function_coverage=1 00:10:20.097 --rc genhtml_legend=1 00:10:20.097 --rc geninfo_all_blocks=1 00:10:20.097 --rc geninfo_unexecuted_blocks=1 00:10:20.097 00:10:20.097 ' 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.097 --rc genhtml_branch_coverage=1 00:10:20.097 --rc genhtml_function_coverage=1 00:10:20.097 --rc genhtml_legend=1 00:10:20.097 --rc geninfo_all_blocks=1 00:10:20.097 --rc geninfo_unexecuted_blocks=1 00:10:20.097 00:10:20.097 ' 00:10:20.097 14:08:24 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:20.097 14:08:24 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3209830 00:10:20.097 14:08:24 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3209830 00:10:20.097 14:08:24 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3209830 ']' 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.097 14:08:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:20.097 [2024-11-25 14:08:24.993795] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
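This target is intentionally started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable; any other call, like the env_dpdk_get_mem_stats attempt further down, is rejected with -32601 ("Method not found"). The allowed pair can be probed directly against the default socket:
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version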
00:10:20.097 [2024-11-25 14:08:24.993866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209830 ] 00:10:20.097 [2024-11-25 14:08:25.082328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.097 [2024-11-25 14:08:25.117194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.036 14:08:25 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.036 14:08:25 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:21.036 14:08:25 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:21.036 { 00:10:21.036 "version": "SPDK v25.01-pre git sha1 9d382c252", 00:10:21.036 "fields": { 00:10:21.036 "major": 25, 00:10:21.036 "minor": 1, 00:10:21.036 "patch": 0, 00:10:21.036 "suffix": "-pre", 00:10:21.036 "commit": "9d382c252" 00:10:21.036 } 00:10:21.036 } 00:10:21.036 14:08:25 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:21.036 14:08:25 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:21.036 14:08:25 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:21.036 14:08:25 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:21.037 14:08:25 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.037 14:08:25 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:21.037 14:08:25 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.037 14:08:25 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:21.037 14:08:25 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:21.037 14:08:25 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:21.037 14:08:25 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:21.298 request: 00:10:21.298 { 00:10:21.298 "method": "env_dpdk_get_mem_stats", 00:10:21.298 "req_id": 1 00:10:21.298 } 00:10:21.298 Got JSON-RPC error response 00:10:21.298 response: 00:10:21.298 { 00:10:21.298 "code": -32601, 00:10:21.298 "message": "Method not found" 00:10:21.298 } 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:21.298 14:08:26 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3209830 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3209830 ']' 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3209830 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3209830 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3209830' 00:10:21.298 killing process with pid 3209830 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@973 -- # kill 3209830 00:10:21.298 14:08:26 app_cmdline -- common/autotest_common.sh@978 -- # wait 3209830 00:10:21.558 00:10:21.558 real 0m1.654s 00:10:21.558 user 0m1.996s 00:10:21.558 sys 0m0.418s 00:10:21.558 14:08:26 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.558 14:08:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:21.558 ************************************ 00:10:21.558 END TEST app_cmdline 00:10:21.558 ************************************ 00:10:21.558 14:08:26 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:21.558 14:08:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:21.558 14:08:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.558 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:10:21.558 ************************************ 00:10:21.559 START TEST version 00:10:21.559 ************************************ 00:10:21.559 14:08:26 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:21.559 * Looking for test storage... 
00:10:21.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:21.559 14:08:26 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:21.559 14:08:26 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:21.559 14:08:26 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:21.559 14:08:26 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:21.559 14:08:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.819 14:08:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.819 14:08:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.819 14:08:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.819 14:08:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.819 14:08:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.819 14:08:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.819 14:08:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.819 14:08:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.819 14:08:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.819 14:08:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.819 14:08:26 version -- scripts/common.sh@344 -- # case "$op" in 00:10:21.819 14:08:26 version -- scripts/common.sh@345 -- # : 1 00:10:21.819 14:08:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.819 14:08:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.819 14:08:26 version -- scripts/common.sh@365 -- # decimal 1 00:10:21.819 14:08:26 version -- scripts/common.sh@353 -- # local d=1 00:10:21.819 14:08:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.819 14:08:26 version -- scripts/common.sh@355 -- # echo 1 00:10:21.819 14:08:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.819 14:08:26 version -- scripts/common.sh@366 -- # decimal 2 00:10:21.819 14:08:26 version -- scripts/common.sh@353 -- # local d=2 00:10:21.819 14:08:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.819 14:08:26 version -- scripts/common.sh@355 -- # echo 2 00:10:21.819 14:08:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.819 14:08:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.819 14:08:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.819 14:08:26 version -- scripts/common.sh@368 -- # return 0 00:10:21.819 14:08:26 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.819 14:08:26 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.819 --rc genhtml_branch_coverage=1 00:10:21.819 --rc genhtml_function_coverage=1 00:10:21.819 --rc genhtml_legend=1 00:10:21.819 --rc geninfo_all_blocks=1 00:10:21.819 --rc geninfo_unexecuted_blocks=1 00:10:21.819 00:10:21.819 ' 00:10:21.819 14:08:26 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.819 --rc genhtml_branch_coverage=1 00:10:21.819 --rc genhtml_function_coverage=1 00:10:21.819 --rc genhtml_legend=1 00:10:21.819 --rc geninfo_all_blocks=1 00:10:21.819 --rc geninfo_unexecuted_blocks=1 00:10:21.819 00:10:21.820 ' 00:10:21.820 14:08:26 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.820 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.820 --rc genhtml_branch_coverage=1 00:10:21.820 --rc genhtml_function_coverage=1 00:10:21.820 --rc genhtml_legend=1 00:10:21.820 --rc geninfo_all_blocks=1 00:10:21.820 --rc geninfo_unexecuted_blocks=1 00:10:21.820 00:10:21.820 ' 00:10:21.820 14:08:26 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.820 --rc genhtml_branch_coverage=1 00:10:21.820 --rc genhtml_function_coverage=1 00:10:21.820 --rc genhtml_legend=1 00:10:21.820 --rc geninfo_all_blocks=1 00:10:21.820 --rc geninfo_unexecuted_blocks=1 00:10:21.820 00:10:21.820 ' 00:10:21.820 14:08:26 version -- app/version.sh@17 -- # get_header_version major 00:10:21.820 14:08:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # cut -f2 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.820 14:08:26 version -- app/version.sh@17 -- # major=25 00:10:21.820 14:08:26 version -- app/version.sh@18 -- # get_header_version minor 00:10:21.820 14:08:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # cut -f2 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.820 14:08:26 version -- app/version.sh@18 -- # minor=1 00:10:21.820 14:08:26 version -- app/version.sh@19 -- # get_header_version patch 00:10:21.820 14:08:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # cut -f2 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.820 14:08:26 version -- app/version.sh@19 -- # patch=0 00:10:21.820 14:08:26 version -- app/version.sh@20 -- # get_header_version suffix 00:10:21.820 14:08:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # cut -f2 00:10:21.820 14:08:26 version -- app/version.sh@14 -- # tr -d '"' 00:10:21.820 14:08:26 version -- app/version.sh@20 -- # suffix=-pre 00:10:21.820 14:08:26 version -- app/version.sh@22 -- # version=25.1 00:10:21.820 14:08:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:21.820 14:08:26 version -- app/version.sh@28 -- # version=25.1rc0 00:10:21.820 14:08:26 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:21.820 14:08:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:21.820 14:08:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:21.820 14:08:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:21.820 00:10:21.820 real 0m0.276s 00:10:21.820 user 0m0.169s 00:10:21.820 sys 0m0.154s 00:10:21.820 14:08:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.820 
14:08:26 version -- common/autotest_common.sh@10 -- # set +x 00:10:21.820 ************************************ 00:10:21.820 END TEST version 00:10:21.820 ************************************ 00:10:21.820 14:08:26 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:21.820 14:08:26 -- spdk/autotest.sh@194 -- # uname -s 00:10:21.820 14:08:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:21.820 14:08:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:21.820 14:08:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:21.820 14:08:26 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:21.820 14:08:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.820 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:10:21.820 14:08:26 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:10:21.820 14:08:26 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:21.820 14:08:26 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:21.820 14:08:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.820 14:08:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.820 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:10:21.820 ************************************ 00:10:21.820 START TEST nvmf_tcp 00:10:21.820 ************************************ 00:10:21.820 14:08:26 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:22.081 * Looking for test storage... 
00:10:22.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:22.081 14:08:26 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.081 14:08:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.081 14:08:26 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.081 14:08:27 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.081 14:08:27 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:22.081 14:08:27 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.081 14:08:27 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.081 --rc genhtml_branch_coverage=1 00:10:22.081 --rc genhtml_function_coverage=1 00:10:22.081 --rc genhtml_legend=1 00:10:22.081 --rc geninfo_all_blocks=1 00:10:22.081 --rc geninfo_unexecuted_blocks=1 00:10:22.081 00:10:22.081 ' 00:10:22.081 14:08:27 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.082 --rc genhtml_branch_coverage=1 00:10:22.082 --rc genhtml_function_coverage=1 00:10:22.082 --rc genhtml_legend=1 00:10:22.082 --rc geninfo_all_blocks=1 00:10:22.082 --rc geninfo_unexecuted_blocks=1 00:10:22.082 00:10:22.082 ' 00:10:22.082 14:08:27 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:10:22.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.082 --rc genhtml_branch_coverage=1 00:10:22.082 --rc genhtml_function_coverage=1 00:10:22.082 --rc genhtml_legend=1 00:10:22.082 --rc geninfo_all_blocks=1 00:10:22.082 --rc geninfo_unexecuted_blocks=1 00:10:22.082 00:10:22.082 ' 00:10:22.082 14:08:27 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.082 --rc genhtml_branch_coverage=1 00:10:22.082 --rc genhtml_function_coverage=1 00:10:22.082 --rc genhtml_legend=1 00:10:22.082 --rc geninfo_all_blocks=1 00:10:22.082 --rc geninfo_unexecuted_blocks=1 00:10:22.082 00:10:22.082 ' 00:10:22.082 14:08:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:22.082 14:08:27 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:22.082 14:08:27 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:22.082 14:08:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.082 14:08:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.082 14:08:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.082 ************************************ 00:10:22.082 START TEST nvmf_target_core 00:10:22.082 ************************************ 00:10:22.082 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:22.344 * Looking for test storage... 00:10:22.344 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.344 --rc genhtml_branch_coverage=1 00:10:22.344 --rc genhtml_function_coverage=1 00:10:22.344 --rc genhtml_legend=1 00:10:22.344 --rc geninfo_all_blocks=1 00:10:22.344 --rc geninfo_unexecuted_blocks=1 00:10:22.344 00:10:22.344 ' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.344 --rc genhtml_branch_coverage=1 00:10:22.344 --rc genhtml_function_coverage=1 00:10:22.344 --rc genhtml_legend=1 00:10:22.344 --rc geninfo_all_blocks=1 00:10:22.344 --rc geninfo_unexecuted_blocks=1 00:10:22.344 00:10:22.344 ' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.344 --rc genhtml_branch_coverage=1 00:10:22.344 --rc genhtml_function_coverage=1 00:10:22.344 --rc genhtml_legend=1 00:10:22.344 --rc geninfo_all_blocks=1 00:10:22.344 --rc geninfo_unexecuted_blocks=1 00:10:22.344 00:10:22.344 ' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.344 --rc genhtml_branch_coverage=1 00:10:22.344 --rc genhtml_function_coverage=1 00:10:22.344 --rc genhtml_legend=1 00:10:22.344 --rc geninfo_all_blocks=1 00:10:22.344 --rc geninfo_unexecuted_blocks=1 00:10:22.344 00:10:22.344 ' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.344 14:08:27 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.345 
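The lcov gate traced above — and re-run verbatim at the top of every nested test as each script re-sources common.sh — funnels through cmp_versions in scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays, each field is normalized to a decimal, and the arrays are compared element by element. A minimal sketch reconstructed from the xtrace, not copied from the repo (simplified: the real helper also tracks lt/gt/eq counters, and the fall-back to 0 for non-numeric fields is an assumption the trace never exercises):

    decimal() {
        # Normalize one version field to a number; non-numeric fields become 0 (assumption).
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
    }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            ver1[v]=$(decimal "${ver1[v]:-0}")
            ver2[v]=$(decimal "${ver2[v]:-0}")
            (( ver1[v] > ver2[v] )) && { [[ $op == '>' || $op == '>=' ]]; return; }
            (( ver1[v] < ver2[v] )) && { [[ $op == '<' || $op == '<=' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all compared fields equal
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "1.15 < 2"   # first fields decide: 1 < 2, return 0 as in the trace

Since lt 1.15 2 succeeds, the 1.x-era rc keys are exported (LCOV_OPTS='--rc lcov_branch_coverage=1 ...'); lcov 2.x apparently renamed these keys (branch_coverage and friends), which is presumably why the version gate exists at all.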
************************************ 00:10:22.345 START TEST nvmf_abort 00:10:22.345 ************************************ 00:10:22.345 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:22.606 * Looking for test storage... 00:10:22.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:10:22.606 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.607 --rc genhtml_branch_coverage=1 00:10:22.607 --rc genhtml_function_coverage=1 00:10:22.607 --rc genhtml_legend=1 00:10:22.607 --rc geninfo_all_blocks=1 00:10:22.607 --rc geninfo_unexecuted_blocks=1 00:10:22.607 00:10:22.607 ' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.607 --rc genhtml_branch_coverage=1 00:10:22.607 --rc genhtml_function_coverage=1 00:10:22.607 --rc genhtml_legend=1 00:10:22.607 --rc geninfo_all_blocks=1 00:10:22.607 --rc geninfo_unexecuted_blocks=1 00:10:22.607 00:10:22.607 ' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.607 --rc genhtml_branch_coverage=1 00:10:22.607 --rc genhtml_function_coverage=1 00:10:22.607 --rc genhtml_legend=1 00:10:22.607 --rc geninfo_all_blocks=1 00:10:22.607 --rc geninfo_unexecuted_blocks=1 00:10:22.607 00:10:22.607 ' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.607 --rc genhtml_branch_coverage=1 00:10:22.607 --rc genhtml_function_coverage=1 00:10:22.607 --rc genhtml_legend=1 00:10:22.607 --rc geninfo_all_blocks=1 00:10:22.607 --rc geninfo_unexecuted_blocks=1 00:10:22.607 00:10:22.607 ' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
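One artifact worth noticing in the trace: each re-source of /etc/opt/spdk-pkgdep/paths/export.sh prepends the same /opt/golangci, /opt/protoc and /opt/go prefixes again, so the PATH echoed here is visibly longer than the one printed for the previous test. A guard like the following keeps such prepends idempotent — a sketch of an alternative, not the repo's actual export.sh:

    prepend_path() {
        # Prepend a directory only if it is not already somewhere in PATH.
        case ":$PATH:" in
            *":$1:"*) ;;              # already present: leave PATH unchanged
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin   # ends up first, as in the traced export
    export PATH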
00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.607 14:08:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:30.753 14:08:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:30.753 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:30.753 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:30.753 14:08:34 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:30.753 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:30.753 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:30.753 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:30.754 14:08:34 
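The NIC discovery traced above reduces to: build per-family lists of supported PCI device IDs (e810, x722, mlx5), scan the PCI bus for vendor/device matches, and take the kernel netdev exposed under each matching function's sysfs node — here the two Intel E810 ports 0000:4b:00.0 and 0000:4b:00.1 (0x8086:0x159b), whose interfaces are named cvl_0_0 and cvl_0_1. A condensed sketch of that scan, with the ID list trimmed to the e810 entries actually matched on this host:

    intel=0x8086
    e810=(0x1592 0x159b)                          # E810 device IDs from the trace
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        [[ " ${e810[*]} " == *" $device "* ]] || continue
        for net_dev in "$pci"/net/*; do           # netdev the kernel bound to this function
            [[ -e $net_dev ]] && echo "Found ${pci##*/}: ${net_dev##*/}"
        done
    done

With two matches, the harness takes the first port as the target interface and the second as the initiator, exactly as NVMF_TARGET_INTERFACE=cvl_0_0 / NVMF_INITIATOR_INTERFACE=cvl_0_1 show above.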
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:30.754 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:30.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:30.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:10:30.754 00:10:30.754 --- 10.0.0.2 ping statistics --- 00:10:30.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.754 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:30.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:10:30.754 00:10:30.754 --- 10.0.0.1 ping statistics --- 00:10:30.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.754 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3214322 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3214322 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3214322 ']' 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.754 14:08:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:30.754 [2024-11-25 14:08:35.217478] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
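Everything nvmf_tcp_init did with those two ports boils down to a little iproute2 plumbing: isolate the target port in its own network namespace, give each side an address on 10.0.0.0/24, open TCP port 4420 on the initiator side, and ping in both directions to prove reachability. Condensed from the trace (commands as logged, only grouping and comments added):

    ip netns add cvl_0_0_ns_spdk                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target ns (0.527 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns (0.275 ms above)

The SPDK_NVMF tag in the iptables comment is what lets the teardown later strip exactly this rule with iptables-save | grep -v SPDK_NVMF | iptables-restore.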
00:10:30.754 [2024-11-25 14:08:35.217543] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.754 [2024-11-25 14:08:35.316394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.754 [2024-11-25 14:08:35.370915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:30.754 [2024-11-25 14:08:35.370965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:30.754 [2024-11-25 14:08:35.370974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:30.754 [2024-11-25 14:08:35.370981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:30.754 [2024-11-25 14:08:35.370987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:30.754 [2024-11-25 14:08:35.372860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.754 [2024-11-25 14:08:35.373025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.754 [2024-11-25 14:08:35.373025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.016 [2024-11-25 14:08:36.092326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.016 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.278 Malloc0 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.278 Delay0 
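The rpc_cmd calls above, plus the subsystem calls just below, assemble the target for the abort test: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, and a delay bdev wrapping it with 1000000 µs of injected latency so that submitted I/O stays in flight long enough to be aborted. Assuming rpc_cmd resolves to scripts/rpc.py against the nvmf_tgt just started (which is what the autotest wrapper does), the rough direct equivalent — arguments copied from the trace — is:

    rpc=./scripts/rpc.py    # run from the SPDK repo root (path is an assumption)
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256   # transport options as traced
    $rpc bdev_malloc_create 64 4096 -b Malloc0            # 64 MiB RAM-backed bdev, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # avg/p99 read+write latency, in µs
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420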
00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.278 [2024-11-25 14:08:36.174682] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.278 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:31.278 [2024-11-25 14:08:36.315330] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:33.834 Initializing NVMe Controllers 00:10:33.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:33.834 controller IO queue size 128 less than required 00:10:33.834 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:33.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:33.834 Initialization complete. Launching workers. 
00:10:33.834 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27871 00:10:33.834 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27932, failed to submit 62 00:10:33.834 success 27875, unsuccessful 57, failed 0 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:33.834 rmmod nvme_tcp 00:10:33.834 rmmod nvme_fabrics 00:10:33.834 rmmod nvme_keyring 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3214322 ']' 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3214322 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3214322 ']' 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3214322 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3214322 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3214322' 00:10:33.834 killing process with pid 3214322 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3214322 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3214322 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:33.834 14:08:38 
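Sanity check on the abort counters above: 27875 successful + 57 unsuccessful + 0 failed = 27932 aborts submitted, and 27932 + 62 failed-to-submit = 27994 abort attempts — matching the 123 completed + 27871 failed (i.e. successfully aborted) = 27994 I/Os the example issued at queue depth 128 against the ~1 s delay bdev.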
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.834 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:35.750 00:10:35.750 real 0m13.349s 00:10:35.750 user 0m13.858s 00:10:35.750 sys 0m6.588s 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.750 ************************************ 00:10:35.750 END TEST nvmf_abort 00:10:35.750 ************************************ 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.750 ************************************ 00:10:35.750 START TEST nvmf_ns_hotplug_stress 00:10:35.750 ************************************ 00:10:35.750 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:36.013 * Looking for test storage... 
00:10:36.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.013 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:36.013 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:10:36.013 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:36.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.013 --rc genhtml_branch_coverage=1 00:10:36.013 --rc genhtml_function_coverage=1 00:10:36.013 --rc genhtml_legend=1 00:10:36.013 --rc geninfo_all_blocks=1 00:10:36.013 --rc geninfo_unexecuted_blocks=1 00:10:36.013 00:10:36.013 ' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:36.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.013 --rc genhtml_branch_coverage=1 00:10:36.013 --rc genhtml_function_coverage=1 00:10:36.013 --rc genhtml_legend=1 00:10:36.013 --rc geninfo_all_blocks=1 00:10:36.013 --rc geninfo_unexecuted_blocks=1 00:10:36.013 00:10:36.013 ' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:36.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.013 --rc genhtml_branch_coverage=1 00:10:36.013 --rc genhtml_function_coverage=1 00:10:36.013 --rc genhtml_legend=1 00:10:36.013 --rc geninfo_all_blocks=1 00:10:36.013 --rc geninfo_unexecuted_blocks=1 00:10:36.013 00:10:36.013 ' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:36.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.013 --rc genhtml_branch_coverage=1 00:10:36.013 --rc genhtml_function_coverage=1 00:10:36.013 --rc genhtml_legend=1 00:10:36.013 --rc geninfo_all_blocks=1 00:10:36.013 --rc geninfo_unexecuted_blocks=1 00:10:36.013 00:10:36.013 ' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
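
Above, nvmf/common.sh pins the test constants (ports 4420-4422, serial SPDKISFASTANDAWESOME, subsystem nqn.2016-06.io.spdk:testnqn) and derives the initiator identity from nvme gen-hostnqn, whose uuid suffix doubles as the host ID. A sketch of that derivation and the nvme connect call it ultimately feeds; the only assumption is that nvme-cli is installed, and the address and port repeat values used later in this run:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # bare uuid, as NVME_HOSTID above
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
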
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.013 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
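
Every nested source of paths/export.sh above prepends the Go, protoc, and golangci directories again, which is why PATH balloons into the repeated runs just printed. Lookup still works (first match wins), but an idempotent prepend keeps the variable sane; path_prepend is a hypothetical helper, not something export.sh defines:

    # Prepend a directory to PATH only when it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                    # already on PATH: no-op
            *)        PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin         # second call changes nothing
    export PATH
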
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:36.014 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
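
The '[: : integer expression expected' complaint above is nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']': whichever variable sits behind that test (its name is not visible in the trace, so it is just $flag below) expanded empty, and test's -eq insists on integers. Defaulting the expansion is the usual fix; a sketch:

    flag=""                                 # empty in this run, as in the trace
    # Broken form: [ "$flag" -eq 1 ]  ->  "[: : integer expression expected"
    if [ "${flag:-0}" -eq 1 ]; then         # empty or unset collapses to 0
        echo "feature enabled"
    fi

The run continues past the message because the failed test simply selects the else-branch.
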
local -ga e810 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:44.160 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.160 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.161 
14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:44.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:44.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
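
The discovery pass above walks a prebuilt PCI-id cache, keeps the two E810 functions (0x8086:0x159b, bound to the ice driver), and resolves each to its kernel netdev through sysfs. That resolution is a plain glob; a standalone sketch for one function, with the address taken from the trace:

    # Map a PCI network function to its kernel interface name(s) via sysfs.
    pci=0000:4b:00.0
    for entry in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$entry" ] || continue         # skip when no netdev is bound
        echo "${entry##*/}"                 # prints cvl_0_0 on this machine
    done
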
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:44.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:44.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:10:44.161 00:10:44.161 --- 10.0.0.2 ping statistics --- 00:10:44.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.161 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:44.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:44.161 00:10:44.161 --- 10.0.0.1 ping statistics --- 00:10:44.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.161 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3219091 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3219091 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
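
nvmf_tcp_init above turns the two E810 ports into a point-to-point rig: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is punched through iptables, and one ping in each direction proves the path (0.655 ms and 0.187 ms above). The same topology can be rebuilt with a veth pair when no spare NICs exist; the veth and namespace names below are hypothetical:

    ip netns add spdk_tgt_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_ini
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1    # target -> initiator

The namespace is what lets one machine act as both NVMe-oF host and target without the kernel short-circuiting the TCP connection.
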
3219091 ']' 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.161 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.161 [2024-11-25 14:08:48.620569] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:10:44.161 [2024-11-25 14:08:48.620640] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.161 [2024-11-25 14:08:48.722797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.161 [2024-11-25 14:08:48.774625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.161 [2024-11-25 14:08:48.774681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.161 [2024-11-25 14:08:48.774689] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.161 [2024-11-25 14:08:48.774697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.161 [2024-11-25 14:08:48.774703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
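
nvmfappstart above launches nvmf_tgt inside the target namespace with core mask 0xE (hence the three reactors on cores 1-3) and waits on /var/tmp/spdk.sock, exactly as the 'Waiting for process to start up and listen on UNIX domain socket' line says. A condensed start-and-wait, run from an spdk checkout, polling with the rpc_get_methods RPC; the retry count and sleep interval are arbitrary choices here:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1 && break   # socket answers
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done
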
00:10:44.161 [2024-11-25 14:08:48.776773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.161 [2024-11-25 14:08:48.776937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.161 [2024-11-25 14:08:48.776938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:44.422 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:44.682 [2024-11-25 14:08:49.654000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.682 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.942 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.203 [2024-11-25 14:08:50.049100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.203 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:45.203 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:45.464 Malloc0 00:10:45.464 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:45.725 Delay0 00:10:45.725 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.985 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:45.985 NULL1 00:10:46.246 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
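
The RPC sequence above assembles the stack under test: a TCP transport, subsystem cnode1 capped at 10 namespaces, data and discovery listeners on 10.0.0.2:4420, a 32 MiB malloc bdev wrapped by a delay bdev (the four arguments are read/write average and p99 latencies, in microseconds), and a 1000-block null bdev that the stress loop will keep resizing. Collected in one place, with rpc standing in for the full scripts/rpc.py path:

    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0           # 32 MiB, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000    # slow bdev, on purpose
    $rpc nvmf_subsystem_add_ns $nqn Delay0
    $rpc bdev_null_create NULL1 1000 512                # 1000 blocks of 512 B
    $rpc nvmf_subsystem_add_ns $nqn NULL1

The delay bdev is the point: I/O that takes this long to complete maximizes the window in which a namespace can be yanked out from under it.
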
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:46.246 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:46.246 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3219748 00:10:46.246 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:46.246 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.629 Read completed with error (sct=0, sc=11) 00:10:47.630 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.630 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:47.630 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:47.890 true 00:10:47.890 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:47.890 14:08:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.831 14:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.831 14:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:48.831 14:08:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:49.091 true 00:10:49.091 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:49.091 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.351 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.351 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
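
From here to the end of the section the trace is one pattern on repeat: spdk_nvme_perf (PERF_PID=3219748) drives 512-byte random reads at queue depth 128 for 30 seconds, while the script, for as long as kill -0 reports perf alive, detaches namespace 1, re-attaches Delay0, and grows NULL1 one block per pass (null_size 1001, 1002, ...). The 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' lines are the intended fallout of reads racing the detach, and -Q 1000 is what lets perf survive them, printing only every thousandth. The loop, condensed, reusing $rpc and $nqn from the previous sketch:

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do           # until the timed run ends
        $rpc nvmf_subsystem_remove_ns $nqn 1            # yank namespace 1 mid-I/O
        $rpc nvmf_subsystem_add_ns   $nqn Delay0        # put it back
        null_size=$(( null_size + 1 ))
        $rpc bdev_null_resize NULL1 "$null_size"        # grow NULL1 by one block
    done
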
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:49.351 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:49.611 true 00:10:49.612 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:49.612 14:08:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.998 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:50.998 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:50.998 14:08:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:50.998 true 00:10:50.998 14:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:50.998 14:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.063 14:08:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.063 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.063 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:52.063 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:52.349 true 00:10:52.349 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:52.349 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.610 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.610 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:52.610 14:08:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:52.871 true 00:10:52.871 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:52.871 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.131 14:08:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.131 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:53.131 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:53.391 true 00:10:53.391 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:53.391 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.652 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.652 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:53.652 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:53.912 true 00:10:53.912 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:53.912 14:08:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.173 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.173 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:54.173 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:54.432 true 00:10:54.432 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:54.432 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.692 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.952 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:54.952 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:54.952 true 00:10:54.952 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:54.952 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.335 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.335 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:56.335 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:56.595 true 00:10:56.595 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:56.595 14:09:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.535 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.535 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:57.535 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:57.795 true 00:10:57.795 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:57.795 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.054 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.054 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:58.054 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:58.313 true 00:10:58.313 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:58.313 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.572 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.572 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:58.572 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:58.833 true 00:10:58.833 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:58.833 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.093 14:09:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.354 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:59.354 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:59.354 true 00:10:59.354 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:10:59.354 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.737 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.737 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.737 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:00.737 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:00.997 true 00:11:00.997 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:00.997 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.937 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.937 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:01.937 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:02.198 true 00:11:02.198 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:02.198 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.198 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.459 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:02.460 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:02.721 true 00:11:02.721 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:02.721 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.721 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.721 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.981 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.981 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:02.982 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:03.242 true 00:11:03.242 14:09:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:03.242 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.183 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.183 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.183 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:04.183 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:04.443 true 00:11:04.443 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:04.443 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.443 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.703 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:04.703 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:04.964 true 00:11:04.964 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:04.964 14:09:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.224 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.224 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:05.224 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:05.484 true 00:11:05.484 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:05.485 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.745 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.745 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:11:05.745 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:06.005 true 00:11:06.005 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:06.005 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.389 14:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:07.389 14:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:07.389 14:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:07.389 true 00:11:07.389 14:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:07.389 14:09:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.330 14:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.598 14:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:08.598 14:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:08.598 true 00:11:08.598 14:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:08.598 14:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.861 14:09:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.121 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:09.121 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:09.121 true 00:11:09.381 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:09.381 14:09:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.321 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.581 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:10.581 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:10.581 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:10.581 true 00:11:10.841 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:10.841 14:09:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.672 14:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.672 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.672 14:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:11.672 14:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:11.933 true 00:11:11.933 14:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:11.933 14:09:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.193 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.193 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:12.193 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:12.454 true 00:11:12.454 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:12.454 14:09:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.840 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:13.840 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:13.840 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:13.840 true 00:11:14.100 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:14.100 14:09:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.671 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.931 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:14.931 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:15.192 true 00:11:15.192 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748 00:11:15.192 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.452 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.452 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:15.452 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:15.713 true 00:11:15.713 14:09:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748
00:11:15.714 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:16.656 Initializing NVMe Controllers
00:11:16.656 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:16.656 Controller IO queue size 128, less than required.
00:11:16.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:16.656 Controller IO queue size 128, less than required.
00:11:16.656 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:16.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:16.656 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:16.656 Initialization complete. Launching workers.
00:11:16.656 ========================================================
00:11:16.656                                                                           Latency(us)
00:11:16.656 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:11:16.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2042.60       1.00   37880.06    1456.48 1072880.28
00:11:16.656 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18038.33       8.81    7095.81    1140.75  341851.88
00:11:16.656 ========================================================
00:11:16.656 Total                                                                  :   20080.93       9.81   10227.13    1140.75 1072880.28
00:11:16.656
00:11:16.916 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:16.916 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:11:16.916 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:11:17.177 true
00:11:17.177 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3219748
00:11:17.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3219748) - No such process
00:11:17.177 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3219748
00:11:17.177 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:17.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:17.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:17.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:17.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:17.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # 
(( i < nthreads )) 00:11:17.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:17.700 null0 00:11:17.700 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:17.700 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:17.700 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:18.016 null1 00:11:18.016 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.016 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.016 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:18.016 null2 00:11:18.016 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.016 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.016 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:18.301 null3 00:11:18.301 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.301 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.301 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:18.301 null4 00:11:18.560 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.560 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.560 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:18.560 null5 00:11:18.560 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.560 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.560 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:18.821 null6 00:11:18.821 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:18.821 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:18.821 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 
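The @58-@60 trace above creates eight null bdevs, null0 through null7, one per worker thread, each with the traced arguments 100 and 4096 (size and block size; in SPDK's usual convention the size is in MiB). A minimal bash sketch of that step, reconstructed from the trace rather than copied from the SPDK source; the rpc variable is shorthand introduced here for the rpc.py path that appears throughout the log:

# Reconstruction of the bdev-creation loop implied by the @58-@60 trace above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()
for (( i = 0; i < nthreads; ++i )); do
    # bdev_null_create <name> <size> <block_size>, matching the traced arguments
    "$rpc" bdev_null_create "null$i" 100 4096
done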
00:11:19.082 null7 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
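Interleaved through the trace above are the @14-@18 lines of the add_remove helper that each worker runs. A hedged reconstruction of what those lines suggest; the loop bound 10 comes from the traced (( i < 10 )) checks, and the argument order matches the traced rpc.py calls:

# Reconstruction of add_remove from the @14-@18 trace lines.
add_remove() {
    local nsid=$1 bdev=$2
    for (( i = 0; i < 10; ++i )); do
        # attach the bdev to cnode1 as namespace $nsid, then hot-remove it again
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}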
00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
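The @62-@64 counters in the same trace drive the launch loop: each add_remove call runs in the background, its pid is appended to pids, and the collected pids produce the eight-pid wait line (@66) further down. A sketch under those assumptions; the nsid 1..8 to null0..null7 pairing is taken from the traced "add_remove 1 null0", "add_remove 2 null1" lines:

# Reconstruction of the worker-launch loop (@62-@64) and the final wait (@66).
for (( i = 0; i < nthreads; ++i )); do
    add_remove $((i + 1)) "null$i" &   # nsid 1..8 paired with null0..null7, as traced
    pids+=($!)
done
wait "${pids[@]}"   # expands to the "wait 3226810 3226811 ..." line seen in this log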
00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
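For comparison, the @44-@50 loop traced through the first half of this section (null_size 1020, 1021, ... 1033) appears to run until the I/O generator exits, hot-cycling namespace 1 while growing the NULL1 bdev by one unit per pass. A rough reconstruction, not the verbatim script: perf_pid is a stand-in name for whatever variable holds the traced pid 3219748, while null_size matches the name shown at @49; kill's stderr is left unredirected because the log shows the "No such process" message verbatim:

# Reconstruction of the @44-@50 hotplug/resize loop seen earlier in this log.
while kill -0 "$perf_pid"; do                           # @44: loop while pid 3219748 lives
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
    (( ++null_size ))                                   # @49: 1020, 1021, ...
    "$rpc" bdev_null_resize NULL1 "$null_size"          # @50: prints "true" on success
done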
00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3226810 3226811 3226813 3226815 3226817 3226819 3226821 3226823 00:11:19.082 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:19.083 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:19.083 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.083 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.083 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.083 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.344 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.606 14:09:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.606 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.869 14:09:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:19.869 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.131 14:09:24 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.131 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.131 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.131 14:09:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.131 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.393 14:09:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.393 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:20.654 14:09:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:20.654 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.916 14:09:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:20.916 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.178 14:09:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.178 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.441 14:09:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.441 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.442 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:21.703 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.964 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:21.964 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.964 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.964 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:21.964 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:21.964 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:21.964 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:22.226 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.226 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.226 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:22.226 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.226 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.226 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.227 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.488 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:22.489 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:22.751 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp 
== tcp ']' 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.012 rmmod nvme_tcp 00:11:23.012 rmmod nvme_fabrics 00:11:23.012 rmmod nvme_keyring 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3219091 ']' 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3219091 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3219091 ']' 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3219091 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.012 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3219091 00:11:23.013 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:23.013 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:23.013 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3219091' 00:11:23.013 killing process with pid 3219091 00:11:23.013 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3219091 00:11:23.013 14:09:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3219091 00:11:23.013 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:23.013 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:23.013 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:23.013 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:11:23.013 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:23.013 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:23.013 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:23.274 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:23.275 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:23.275 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:23.275 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.275 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:25.197 00:11:25.197 real 0m49.347s 00:11:25.197 user 3m11.843s 00:11:25.197 sys 0m16.026s 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.197 ************************************ 00:11:25.197 END TEST nvmf_ns_hotplug_stress 00:11:25.197 ************************************ 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:25.197 ************************************ 00:11:25.197 START TEST nvmf_delete_subsystem 00:11:25.197 ************************************ 00:11:25.197 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:25.459 * Looking for test storage... 
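Note: the lt/cmp_versions trace that follows is the harness checking whether the installed lcov is older than 2.x before settling on LCOV_OPTS. Reconstructed from the xtrace (a sketch of the helpers in scripts/common.sh, not the verbatim source), the comparison is a plain component-wise numeric compare:

    # sketch: split versions on '.', '-' and ':' and compare numerically, field by field
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local ver1 ver1_l ver2 ver2_l
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local op=$2 lt=0 gt=0 eq=0 v
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            # the real helper also validates each field via a 'decimal' check, per the trace
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then gt=1; break; fi
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then lt=1; break; fi
        done
        case "$op" in '<') ((lt == 1)) ;; '>') ((gt == 1)) ;; esac
    }
    lt 1.15 2   # the call traced below: 1.15 < 2, so it returns 0 (true)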
00:11:25.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:25.459 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:25.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.460 --rc genhtml_branch_coverage=1 00:11:25.460 --rc genhtml_function_coverage=1 00:11:25.460 --rc genhtml_legend=1 00:11:25.460 --rc geninfo_all_blocks=1 00:11:25.460 --rc geninfo_unexecuted_blocks=1 00:11:25.460 00:11:25.460 ' 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:25.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.460 --rc genhtml_branch_coverage=1 00:11:25.460 --rc genhtml_function_coverage=1 00:11:25.460 --rc genhtml_legend=1 00:11:25.460 --rc geninfo_all_blocks=1 00:11:25.460 --rc geninfo_unexecuted_blocks=1 00:11:25.460 00:11:25.460 ' 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:25.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.460 --rc genhtml_branch_coverage=1 00:11:25.460 --rc genhtml_function_coverage=1 00:11:25.460 --rc genhtml_legend=1 00:11:25.460 --rc geninfo_all_blocks=1 00:11:25.460 --rc geninfo_unexecuted_blocks=1 00:11:25.460 00:11:25.460 ' 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:25.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:25.460 --rc genhtml_branch_coverage=1 00:11:25.460 --rc genhtml_function_coverage=1 00:11:25.460 --rc genhtml_legend=1 00:11:25.460 --rc geninfo_all_blocks=1 00:11:25.460 --rc geninfo_unexecuted_blocks=1 00:11:25.460 00:11:25.460 ' 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.460 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:25.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:25.461 14:09:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.607 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:33.608 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.608 
14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:33.608 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:33.608 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:33.608 Found net devices under 0000:4b:00.1: cvl_0_1 
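Note: with both e810 ports discovered and their net devices identified (cvl_0_0 and cvl_0_1 above), nvmf_tcp_init builds a point-to-point topology by moving the target-side port into a network namespace. The commands traced below reduce to this sequence (device names, addresses, and port verbatim from the trace):

    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, namespace side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # sanity-check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every later nvmf_tgt invocation is then prefixed with ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD in the trace), which is why the target listens on 10.0.0.2 while the initiator connects from 10.0.0.1.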
00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:33.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms 00:11:33.608 00:11:33.608 --- 10.0.0.2 ping statistics --- 00:11:33.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.608 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:11:33.608 00:11:33.608 --- 10.0.0.1 ping statistics --- 00:11:33.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.608 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:33.608 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:33.608 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:33.608 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:33.608 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3232088 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3232088 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3232088 ']' 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.609 14:09:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.609 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.609 [2024-11-25 14:09:38.077252] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:11:33.609 [2024-11-25 14:09:38.077318] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.609 [2024-11-25 14:09:38.180474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:33.609 [2024-11-25 14:09:38.232815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.609 [2024-11-25 14:09:38.232867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.609 [2024-11-25 14:09:38.232875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.609 [2024-11-25 14:09:38.232883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.609 [2024-11-25 14:09:38.232889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.609 [2024-11-25 14:09:38.234472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.609 [2024-11-25 14:09:38.234476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.871 [2024-11-25 14:09:38.953793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:33.871 14:09:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.871 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.132 [2024-11-25 14:09:38.978140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.132 NULL1 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.132 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.132 Delay0 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3232363 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:34.132 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:34.132 [2024-11-25 14:09:39.105080] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
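Unrolling the rpc_cmd records above: rpc_cmd hands each verb to SPDK's scripts/rpc.py, which talks to the target over /var/tmp/spdk.sock by default. A sketch of the same fixture issued directly, assuming a running nvmf_tgt; every argument is taken verbatim from the log:

    # TCP transport, options exactly as logged (the '-o' comes from NVMF_TRANSPORT_OPTS)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # subsystem capped at 10 namespaces, then a listener on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # a 1000 MB / 512-byte-block null bdev wrapped in a delay bdev (latency args in microseconds),
    # so the perf run still has I/O in flight when the subsystem gets deleted
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0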
00:11:36.045 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:36.045 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.045 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[a long run of repeated completion records collapsed: "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6", logged between 00:11:36.307 and 00:11:37.253 while the subsystem deletion aborted the I/O that spdk_nvme_perf still had in flight]
00:11:36.307 [2024-11-25 14:09:41.313913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21662c0 is same with the state(6) to be set
00:11:36.308 [2024-11-25 14:09:41.314481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2166680 is same with the state(6) to be set
00:11:36.308 [2024-11-25 14:09:41.316715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f264c000c80 is same with the state(6) to be set
00:11:37.252 [2024-11-25 14:09:42.287457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21679a0 is same with the state(6) to be set
00:11:37.252 [2024-11-25 14:09:42.317050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f264c00d060 is same with the state(6) to be set
00:11:37.252 [2024-11-25 14:09:42.318226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f264c00d800 is same with the state(6) to be set
00:11:37.252 [2024-11-25 14:09:42.318554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21664a0 is same with the state(6) to be set
00:11:37.253 [2024-11-25 14:09:42.319876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2166860 is same with the state(6) to be set
00:11:37.253 Initializing NVMe Controllers
00:11:37.253 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:37.253 Controller IO queue size 128, less than required.
00:11:37.253 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:37.253 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:37.253 Initialization complete. Launching workers.
00:11:37.253 ========================================================
00:11:37.253                                                                                                Latency(us)
00:11:37.253 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:11:37.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   189.42     0.09  894141.56     568.03 1009970.11
00:11:37.253 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   157.61     0.08  979928.07     297.98 2002546.74
00:11:37.253 ========================================================
00:11:37.253 Total                                                                    :   347.03     0.17  933101.91     297.98 2002546.74
00:11:37.253
00:11:37.253 [2024-11-25 14:09:42.320296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21679a0 (9): Bad file descriptor
00:11:37.253 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:37.253 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.253 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:37.253 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3232363
00:11:37.253 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3232363
00:11:37.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3232363) - No such process
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3232363
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3232363
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3232363
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es >
128 )) 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.824 [2024-11-25 14:09:42.851934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3233048 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048 00:11:37.824 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.084 [2024-11-25 14:09:42.957741] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
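The sleep 0.5 records that follow are rounds of delete_subsystem.sh's wait loop: it polls the second perf process with kill -0 every half second and gives up after 20 rounds; once the process is gone, NOT wait asserts that perf exited non-zero, since its subsystem vanished underneath it by design. A sketch of the pattern, with the timeout branch hypothetical because this run never reaches it:

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # perf still running?
        if (( delay++ > 20 )); then              # ~10 s budget at 0.5 s per round
            exit 1                               # hypothetical timeout handling
        fi
        sleep 0.5
    done
    NOT wait "$perf_pid"   # NOT inverts the status: the expected outcome is a failed perf run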
00:11:38.344 14:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:38.344 14:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048 00:11:38.344 14:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:38.914 14:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:38.914 14:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048 00:11:38.914 14:09:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:39.543 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.543 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048 00:11:39.544 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:39.804 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:39.804 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048 00:11:39.804 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.374 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.374 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048 00:11:40.374 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:40.944 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:40.944 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048 00:11:40.944 14:09:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:41.204 Initializing NVMe Controllers 00:11:41.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:41.204 Controller IO queue size 128, less than required. 00:11:41.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:41.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:41.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:41.204 Initialization complete. Launching workers. 
00:11:41.204 ========================================================
00:11:41.204                                                                                                Latency(us)
00:11:41.204 Device Information                                                       :     IOPS    MiB/s    Average        min        max
00:11:41.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   128.00     0.06 1002045.11 1000196.03 1004959.86
00:11:41.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   128.00     0.06 1003537.41 1000397.50 1041080.43
00:11:41.204 ========================================================
00:11:41.204 Total                                                                    :   256.00     0.12 1002791.26 1000196.03 1041080.43
00:11:41.204
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3233048
00:11:41.465 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3233048) - No such process
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3233048
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:41.465 rmmod nvme_tcp
00:11:41.465 rmmod nvme_fabrics
00:11:41.465 rmmod nvme_keyring
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3232088 ']'
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3232088
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3232088 ']'
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3232088
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3232088
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '['
reactor_0 = sudo ']' 00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3232088' 00:11:41.465 killing process with pid 3232088 00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3232088 00:11:41.465 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3232088 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.727 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.645 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:43.906 00:11:43.906 real 0m18.473s 00:11:43.906 user 0m31.143s 00:11:43.906 sys 0m6.842s 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:43.906 ************************************ 00:11:43.906 END TEST nvmf_delete_subsystem 00:11:43.906 ************************************ 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:43.906 ************************************ 00:11:43.906 START TEST nvmf_host_management 00:11:43.906 ************************************ 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:43.906 * Looking for test storage... 
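One teardown detail worth pulling out of the log above: nvmf/common.sh only ever deletes its own firewall rules. Each rule is inserted with a searchable SPDK_NVMF comment, and the iptr step later drops everything so tagged by filtering the saved ruleset. Both halves appear verbatim in this run:

    # setup: tag the ACCEPT rule with a comment that records how it was added
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: one pass removes every tagged rule and nothing else
    iptables-save | grep -v SPDK_NVMF | iptables-restore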
00:11:43.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:11:43.906 14:09:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:44.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.168 --rc genhtml_branch_coverage=1 00:11:44.168 --rc genhtml_function_coverage=1 00:11:44.168 --rc genhtml_legend=1 00:11:44.168 --rc geninfo_all_blocks=1 00:11:44.168 --rc geninfo_unexecuted_blocks=1 00:11:44.168 00:11:44.168 ' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:44.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.168 --rc genhtml_branch_coverage=1 00:11:44.168 --rc genhtml_function_coverage=1 00:11:44.168 --rc genhtml_legend=1 00:11:44.168 --rc geninfo_all_blocks=1 00:11:44.168 --rc geninfo_unexecuted_blocks=1 00:11:44.168 00:11:44.168 ' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:44.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.168 --rc genhtml_branch_coverage=1 00:11:44.168 --rc genhtml_function_coverage=1 00:11:44.168 --rc genhtml_legend=1 00:11:44.168 --rc geninfo_all_blocks=1 00:11:44.168 --rc geninfo_unexecuted_blocks=1 00:11:44.168 00:11:44.168 ' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:44.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.168 --rc genhtml_branch_coverage=1 00:11:44.168 --rc genhtml_function_coverage=1 00:11:44.168 --rc genhtml_legend=1 00:11:44.168 --rc geninfo_all_blocks=1 00:11:44.168 --rc geninfo_unexecuted_blocks=1 00:11:44.168 00:11:44.168 ' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.168 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[further repetitions of these three toolchain directories and the base system PATH collapsed]
[paths/export.sh@3 and @4 each prepend the same go/golangci/protoc toolchain directories again; @5 runs 'export PATH' and @6 echoes the resulting value; the three near-identical full-PATH records are collapsed here]
00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:11:44.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.169 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
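The "[: : integer expression expected" complaint above is bash's test builtin receiving an empty operand: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the variable it tests is unset in this configuration. A hedged sketch of the failure and the usual guard (the variable name here is hypothetical, not taken from the script):

    interrupt_mode=""
    [ "$interrupt_mode" -eq 1 ]          # prints: [: : integer expression expected
    [ "${interrupt_mode:-0}" -eq 1 ]     # defaulted expansion keeps the test well-formed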
-ga e810 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:52.309 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:52.309 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:52.309 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.309 14:09:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:52.309 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:52.309 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:11:52.310 00:11:52.310 --- 10.0.0.2 ping statistics --- 00:11:52.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.310 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:11:52.310 00:11:52.310 --- 10.0.0.1 ping statistics --- 00:11:52.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.310 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3238064 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3238064 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:52.310 14:09:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3238064 ']' 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.310 14:09:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.310 [2024-11-25 14:09:56.631316] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:11:52.310 [2024-11-25 14:09:56.631383] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.310 [2024-11-25 14:09:56.733341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.310 [2024-11-25 14:09:56.786635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.310 [2024-11-25 14:09:56.786682] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.310 [2024-11-25 14:09:56.786690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.310 [2024-11-25 14:09:56.786697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.310 [2024-11-25 14:09:56.786703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
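A side note on the harness error captured a little earlier in this trace: common.sh line 33 evaluated '[' '' -eq 1 ']' and bash reported "[: : integer expression expected", because the numeric -eq operator requires integer operands and the flag being tested expanded to an empty string. The run simply continues down the false branch, since a failed [ only returns non-zero. A minimal sketch of the failure mode and the usual guard; the variable name SOME_FLAG is hypothetical, as the flag actually tested at common.sh:33 is not visible in this trace:

#!/usr/bin/env bash
# Reproduce the error seen above: an empty expansion is not an integer,
# so the numeric -eq test complains on stderr and returns status 2.
unset SOME_FLAG
if [ "$SOME_FLAG" -eq 1 ]; then    # prints: [: : integer expression expected
    echo "flag set"
fi

# Typical guard: default the expansion so -eq always sees an integer.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi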
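To make the target/initiator split in the trace easier to follow: nvmf_tcp_init keeps one physical port (cvl_0_1, the initiator side, 10.0.0.1) in the default network namespace and moves its back-to-back peer (cvl_0_0, the target side, 10.0.0.2) into a private namespace, so the two E810 ports form an isolated point-to-point link on a single host. A condensed sketch of the same wiring, assembled from the commands shown above (run as root; the cvl_* interface names are specific to this rig):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the default ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                               # default ns reaches the target ns
ip netns exec "$NS" ping -c 1 10.0.0.1           # and the reverse direction

The nvmf_tgt process itself is then launched under ip netns exec "$NS", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the surrounding lines.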
00:11:52.310 [2024-11-25 14:09:56.788715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.310 [2024-11-25 14:09:56.788881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.310 [2024-11-25 14:09:56.789271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:52.310 [2024-11-25 14:09:56.789392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.572 [2024-11-25 14:09:57.505409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.572 Malloc0 00:11:52.572 [2024-11-25 14:09:57.585850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3238435 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3238435 /var/tmp/bdevperf.sock 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3238435 ']' 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:52.572 { 00:11:52.572 "params": { 00:11:52.572 "name": "Nvme$subsystem", 00:11:52.572 "trtype": "$TEST_TRANSPORT", 00:11:52.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.572 "adrfam": "ipv4", 00:11:52.572 "trsvcid": "$NVMF_PORT", 00:11:52.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.572 "hdgst": ${hdgst:-false}, 00:11:52.572 "ddgst": ${ddgst:-false} 00:11:52.572 }, 00:11:52.572 "method": "bdev_nvme_attach_controller" 00:11:52.572 } 00:11:52.572 EOF 00:11:52.572 )") 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:52.572 14:09:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:52.572 "params": { 00:11:52.572 "name": "Nvme0", 00:11:52.572 "trtype": "tcp", 00:11:52.572 "traddr": "10.0.0.2", 00:11:52.572 "adrfam": "ipv4", 00:11:52.572 "trsvcid": "4420", 00:11:52.572 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:52.572 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:52.572 "hdgst": false, 00:11:52.572 "ddgst": false 00:11:52.572 }, 00:11:52.572 "method": "bdev_nvme_attach_controller" 00:11:52.572 }' 00:11:52.833 [2024-11-25 14:09:57.697296] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
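One mechanism worth spelling out from the bdevperf launch above: the tool is handed --json /dev/fd/63, and gen_nvmf_target_json assembles its output from a here-doc template whose ${hdgst:-false}/${ddgst:-false} defaults appear in the trace. A /dev/fd path like that is what bash process substitution expands to, so the generated config reaches bdevperf through an already-open descriptor with no temporary file. A simplified sketch of the pattern, with gen_config standing in for gen_nvmf_target_json and the JSON body taken from the printf output above (any outer wrapping the real helper may add is not visible in this trace):

gen_config() {
# Emit the resolved controller config; the real helper builds this from a
# per-subsystem template and pipes the assembled pieces through jq.
cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# <(gen_config) expands to /dev/fd/63 (or /dev/fd/62 on the later run),
# matching the descriptor paths logged in this section.
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_config) \
    -q 64 -o 65536 -w verify -t 10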
00:11:52.833 [2024-11-25 14:09:57.697369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238435 ] 00:11:52.833 [2024-11-25 14:09:57.791976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.833 [2024-11-25 14:09:57.845671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.094 Running I/O for 10 seconds... 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:53.670 14:09:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.670 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:53.670 [2024-11-25 14:09:58.608410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.670 [2024-11-25 14:09:58.608505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.670 [2024-11-25 14:09:58.608516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.670 [2024-11-25 14:09:58.608525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608803] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.608859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe831b0 is same with the state(6) to be set 00:11:53.671 [2024-11-25 14:09:58.609096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 
[2024-11-25 14:09:58.609292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609479] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.671 [2024-11-25 14:09:58.609504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.671 [2024-11-25 14:09:58.609514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.609984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.609995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610012] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.672 [2024-11-25 14:09:58.610228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.672 [2024-11-25 14:09:58.610239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.673 [2024-11-25 14:09:58.610259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.673 [2024-11-25 14:09:58.610277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.673 [2024-11-25 14:09:58.610294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.673 [2024-11-25 14:09:58.610312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.673 [2024-11-25 14:09:58.610330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.673 [2024-11-25 14:09:58.610346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:53.673 [2024-11-25 14:09:58.610363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:53.673 [2024-11-25 14:09:58.610372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc87290 is same with the state(6) to be set 00:11:53.673 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.673 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:53.673 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.673 [2024-11-25 14:09:58.611684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:53.673 14:09:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:53.673 task offset: 81920 on job bdev=Nvme0n1 fails 00:11:53.673 00:11:53.673 Latency(us) 00:11:53.673 [2024-11-25T13:09:58.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.673 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:53.673 Job: Nvme0n1 ended in about 0.48 seconds with error 00:11:53.673 Verification LBA range: start 0x0 length 0x400 00:11:53.673 Nvme0n1 : 0.48 1341.22 83.83 134.12 0.00 42190.80 4915.20 36263.25 00:11:53.673 [2024-11-25T13:09:58.763Z] =================================================================================================================== 00:11:53.673 [2024-11-25T13:09:58.763Z] Total : 1341.22 83.83 134.12 0.00 42190.80 4915.20 36263.25 00:11:53.673 [2024-11-25 14:09:58.613955] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:53.673 [2024-11-25 14:09:58.613997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6e080 (9): Bad file descriptor 00:11:53.673 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.673 14:09:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:53.673 [2024-11-25 14:09:58.623395] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3238435 00:11:54.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3238435) - No such process 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:54.615 { 00:11:54.615 "params": { 00:11:54.615 "name": "Nvme$subsystem", 00:11:54.615 "trtype": "$TEST_TRANSPORT", 00:11:54.615 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:54.615 "adrfam": "ipv4", 00:11:54.615 "trsvcid": "$NVMF_PORT", 00:11:54.615 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:54.615 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:54.615 "hdgst": ${hdgst:-false}, 00:11:54.615 "ddgst": ${ddgst:-false} 00:11:54.615 }, 00:11:54.615 "method": "bdev_nvme_attach_controller" 00:11:54.615 } 00:11:54.615 EOF 00:11:54.615 )") 00:11:54.615 14:09:59 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:11:54.615 14:09:59 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:54.615 "params": { 00:11:54.615 "name": "Nvme0", 00:11:54.615 "trtype": "tcp", 00:11:54.615 "traddr": "10.0.0.2", 00:11:54.615 "adrfam": "ipv4", 00:11:54.615 "trsvcid": "4420", 00:11:54.615 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:54.615 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:54.615 "hdgst": false, 00:11:54.615 "ddgst": false 00:11:54.615 }, 00:11:54.615 "method": "bdev_nvme_attach_controller" 00:11:54.615 }' 00:11:54.615 [2024-11-25 14:09:59.685999] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:11:54.615 [2024-11-25 14:09:59.686073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238790 ] 00:11:54.875 [2024-11-25 14:09:59.778810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.875 [2024-11-25 14:09:59.814103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.135 Running I/O for 1 seconds... 00:11:56.076 1536.00 IOPS, 96.00 MiB/s 00:11:56.076 Latency(us) 00:11:56.076 [2024-11-25T13:10:01.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.076 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:56.076 Verification LBA range: start 0x0 length 0x400 00:11:56.076 Nvme0n1 : 1.01 1590.28 99.39 0.00 0.00 39547.67 7318.19 32549.55 00:11:56.076 [2024-11-25T13:10:01.166Z] =================================================================================================================== 00:11:56.076 [2024-11-25T13:10:01.166Z] Total : 1590.28 99.39 0.00 0.00 39547.67 7318.19 32549.55 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe 
-v -r nvme-tcp 00:11:56.336 rmmod nvme_tcp 00:11:56.336 rmmod nvme_fabrics 00:11:56.336 rmmod nvme_keyring 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3238064 ']' 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3238064 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3238064 ']' 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3238064 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238064 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238064' 00:11:56.336 killing process with pid 3238064 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3238064 00:11:56.336 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3238064 00:11:56.597 [2024-11-25 14:10:01.474412] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.597 14:10:01 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:58.524 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:58.524 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:58.524 00:11:58.524 real 0m14.757s 00:11:58.524 user 0m23.618s 00:11:58.524 sys 0m6.802s 00:11:58.524 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:58.524 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:58.524 ************************************ 00:11:58.524 END TEST nvmf_host_management 00:11:58.524 ************************************ 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.785 ************************************ 00:11:58.785 START TEST nvmf_lvol 00:11:58.785 ************************************ 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:58.785 * Looking for test storage... 00:11:58.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:58.785 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:58.786 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:58.786 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:58.786 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.786 --rc genhtml_branch_coverage=1 00:11:58.786 --rc genhtml_function_coverage=1 00:11:58.786 --rc genhtml_legend=1 00:11:58.786 --rc geninfo_all_blocks=1 00:11:58.786 --rc geninfo_unexecuted_blocks=1 00:11:58.786 00:11:58.786 ' 00:11:58.786 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.786 --rc genhtml_branch_coverage=1 00:11:58.786 --rc genhtml_function_coverage=1 00:11:58.786 --rc genhtml_legend=1 00:11:58.786 --rc geninfo_all_blocks=1 00:11:58.786 --rc geninfo_unexecuted_blocks=1 00:11:58.786 00:11:58.786 ' 00:11:58.786 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.786 --rc genhtml_branch_coverage=1 00:11:58.786 --rc genhtml_function_coverage=1 00:11:58.786 --rc genhtml_legend=1 00:11:58.786 --rc geninfo_all_blocks=1 00:11:58.786 --rc geninfo_unexecuted_blocks=1 00:11:58.786 00:11:58.786 ' 00:11:58.786 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:58.786 --rc genhtml_branch_coverage=1 00:11:58.786 --rc genhtml_function_coverage=1 00:11:58.786 --rc genhtml_legend=1 00:11:58.786 --rc geninfo_all_blocks=1 00:11:58.786 --rc geninfo_unexecuted_blocks=1 00:11:58.786 00:11:58.786 ' 00:11:58.786 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.786 14:10:03 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.047 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.048 14:10:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:07.287 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:07.288 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:07.288 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:07.288 14:10:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:07.288 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:07.288 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:07.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:12:07.288 00:12:07.288 --- 10.0.0.2 ping statistics --- 00:12:07.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.288 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:07.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:12:07.288 00:12:07.288 --- 10.0.0.1 ping statistics --- 00:12:07.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.288 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3243417 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3243417 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3243417 ']' 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.288 14:10:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:07.288 [2024-11-25 14:10:11.476317] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:12:07.288 [2024-11-25 14:10:11.476385] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.288 [2024-11-25 14:10:11.578816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.288 [2024-11-25 14:10:11.632391] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.288 [2024-11-25 14:10:11.632449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.288 [2024-11-25 14:10:11.632457] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.288 [2024-11-25 14:10:11.632465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.288 [2024-11-25 14:10:11.632472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.288 [2024-11-25 14:10:11.634268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.288 [2024-11-25 14:10:11.634435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.289 [2024-11-25 14:10:11.634435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.289 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.289 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:12:07.289 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.289 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.289 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:07.289 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.289 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:07.550 [2024-11-25 14:10:12.511044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.550 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:07.811 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:07.811 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:08.072 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:08.072 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:08.333 14:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:08.333 14:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c9e612e5-c32b-415c-b971-23e22ff11491 00:12:08.333 14:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c9e612e5-c32b-415c-b971-23e22ff11491 lvol 20 00:12:08.594 14:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=50c9fbf7-5ac0-4295-a47a-e53941aaff7b 00:12:08.594 14:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:08.855 14:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50c9fbf7-5ac0-4295-a47a-e53941aaff7b 00:12:09.115 14:10:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:09.115 [2024-11-25 14:10:14.143838] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.115 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:09.376 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3243881 00:12:09.376 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:09.376 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:10.317 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 50c9fbf7-5ac0-4295-a47a-e53941aaff7b MY_SNAPSHOT 00:12:10.579 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0f29c479-9b9a-44d2-badc-4b1ed5f69925 00:12:10.579 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 50c9fbf7-5ac0-4295-a47a-e53941aaff7b 30 00:12:10.841 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0f29c479-9b9a-44d2-badc-4b1ed5f69925 MY_CLONE 00:12:11.103 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7e804fd5-e7f7-4029-bed1-c7d3168521fc 00:12:11.103 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7e804fd5-e7f7-4029-bed1-c7d3168521fc 00:12:11.363 14:10:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3243881 00:12:21.383 Initializing NVMe Controllers 00:12:21.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:21.383 Controller IO queue size 128, less than required. 00:12:21.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:12:21.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:21.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:21.383 Initialization complete. Launching workers. 00:12:21.383 ======================================================== 00:12:21.383 Latency(us) 00:12:21.383 Device Information : IOPS MiB/s Average min max 00:12:21.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16158.60 63.12 7924.96 1534.91 55053.23 00:12:21.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17422.50 68.06 7346.26 846.53 51969.21 00:12:21.383 ======================================================== 00:12:21.383 Total : 33581.10 131.18 7624.72 846.53 55053.23 00:12:21.383 00:12:21.383 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:21.383 14:10:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 50c9fbf7-5ac0-4295-a47a-e53941aaff7b 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9e612e5-c32b-415c-b971-23e22ff11491 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:21.383 rmmod nvme_tcp 00:12:21.383 rmmod nvme_fabrics 00:12:21.383 rmmod nvme_keyring 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3243417 ']' 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3243417 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3243417 ']' 00:12:21.383 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3243417 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243417 00:12:21.384 14:10:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243417' 00:12:21.384 killing process with pid 3243417 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3243417 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3243417 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.384 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:22.769 00:12:22.769 real 0m23.863s 00:12:22.769 user 1m4.439s 00:12:22.769 sys 0m8.636s 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:22.769 ************************************ 00:12:22.769 END TEST nvmf_lvol 00:12:22.769 ************************************ 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:22.769 ************************************ 00:12:22.769 START TEST nvmf_lvs_grow 00:12:22.769 ************************************ 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:22.769 * Looking for test storage... 
00:12:22.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:22.769 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.770 --rc genhtml_branch_coverage=1 00:12:22.770 --rc genhtml_function_coverage=1 00:12:22.770 --rc genhtml_legend=1 00:12:22.770 --rc geninfo_all_blocks=1 00:12:22.770 --rc geninfo_unexecuted_blocks=1 00:12:22.770 00:12:22.770 ' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.770 --rc genhtml_branch_coverage=1 00:12:22.770 --rc genhtml_function_coverage=1 00:12:22.770 --rc genhtml_legend=1 00:12:22.770 --rc geninfo_all_blocks=1 00:12:22.770 --rc geninfo_unexecuted_blocks=1 00:12:22.770 00:12:22.770 ' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.770 --rc genhtml_branch_coverage=1 00:12:22.770 --rc genhtml_function_coverage=1 00:12:22.770 --rc genhtml_legend=1 00:12:22.770 --rc geninfo_all_blocks=1 00:12:22.770 --rc geninfo_unexecuted_blocks=1 00:12:22.770 00:12:22.770 ' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:22.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:22.770 --rc genhtml_branch_coverage=1 00:12:22.770 --rc genhtml_function_coverage=1 00:12:22.770 --rc genhtml_legend=1 00:12:22.770 --rc geninfo_all_blocks=1 00:12:22.770 --rc geninfo_unexecuted_blocks=1 00:12:22.770 00:12:22.770 ' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:22.770 14:10:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:22.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:22.770 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:12:22.771 14:10:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:12:30.914 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:30.915 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:30.915 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.915 14:10:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:30.915 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:30.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.915 14:10:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:30.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:12:30.915 00:12:30.915 --- 10.0.0.2 ping statistics --- 00:12:30.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.915 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:30.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:12:30.915 00:12:30.915 --- 10.0.0.1 ping statistics --- 00:12:30.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.915 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:30.915 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3250392 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3250392 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3250392 ']' 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:30.916 14:10:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:30.916 [2024-11-25 14:10:35.368380] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
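nvmfappstart above launches the target under ip netns exec so it binds inside cvl_0_0_ns_spdk, then waitforlisten blocks until the RPC socket answers. A rough equivalent, assuming the SPDK tree layout used in this run; the polling loop only approximates what waitforlisten does, it is not its actual implementation:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# poll /var/tmp/spdk.sock until the app is up; rpc_get_methods is a cheap query
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done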
00:12:30.916 [2024-11-25 14:10:35.368449] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.916 [2024-11-25 14:10:35.468674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.916 [2024-11-25 14:10:35.520494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.916 [2024-11-25 14:10:35.520545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.916 [2024-11-25 14:10:35.520553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.916 [2024-11-25 14:10:35.520561] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.916 [2024-11-25 14:10:35.520567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.916 [2024-11-25 14:10:35.521324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.179 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:31.179 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:31.179 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:31.179 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:31.179 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:31.179 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.179 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:31.440 [2024-11-25 14:10:36.380268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:31.440 ************************************ 00:12:31.440 START TEST lvs_grow_clean 00:12:31.440 ************************************ 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:31.440 14:10:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:31.440 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:31.702 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:31.702 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:31.963 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:31.963 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:31.963 14:10:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:32.223 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:32.224 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:32.224 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 lvol 150 00:12:32.224 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a42d0b89-93a8-45bd-938a-a66b516ad7dd 00:12:32.224 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:32.224 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:32.484 [2024-11-25 14:10:37.444853] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:32.484 [2024-11-25 14:10:37.444926] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:32.484 true 00:12:32.484 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:32.485 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:32.746 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:32.746 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:32.746 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a42d0b89-93a8-45bd-938a-a66b516ad7dd 00:12:33.007 14:10:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:33.269 [2024-11-25 14:10:38.139065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3250941 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3250941 /var/tmp/bdevperf.sock 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3250941 ']' 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:33.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.269 14:10:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:33.531 [2024-11-25 14:10:38.378660] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
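Around this point the harness starts bdevperf on its own RPC socket, attaches the exported lvol over NVMe/TCP as Nvme0, and later drives the 10-second randwrite run. Condensed from the commands visible in this trace, with paths shortened to the spdk tree root:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests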
00:12:33.531 [2024-11-25 14:10:38.378730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3250941 ] 00:12:33.531 [2024-11-25 14:10:38.476502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.532 [2024-11-25 14:10:38.529282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.475 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.475 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:34.475 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:34.736 Nvme0n1 00:12:34.737 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:34.737 [ 00:12:34.737 { 00:12:34.737 "name": "Nvme0n1", 00:12:34.737 "aliases": [ 00:12:34.737 "a42d0b89-93a8-45bd-938a-a66b516ad7dd" 00:12:34.737 ], 00:12:34.737 "product_name": "NVMe disk", 00:12:34.737 "block_size": 4096, 00:12:34.737 "num_blocks": 38912, 00:12:34.737 "uuid": "a42d0b89-93a8-45bd-938a-a66b516ad7dd", 00:12:34.737 "numa_id": 0, 00:12:34.737 "assigned_rate_limits": { 00:12:34.737 "rw_ios_per_sec": 0, 00:12:34.737 "rw_mbytes_per_sec": 0, 00:12:34.737 "r_mbytes_per_sec": 0, 00:12:34.737 "w_mbytes_per_sec": 0 00:12:34.737 }, 00:12:34.737 "claimed": false, 00:12:34.737 "zoned": false, 00:12:34.737 "supported_io_types": { 00:12:34.737 "read": true, 00:12:34.737 "write": true, 00:12:34.737 "unmap": true, 00:12:34.737 "flush": true, 00:12:34.737 "reset": true, 00:12:34.737 "nvme_admin": true, 00:12:34.737 "nvme_io": true, 00:12:34.737 "nvme_io_md": false, 00:12:34.737 "write_zeroes": true, 00:12:34.737 "zcopy": false, 00:12:34.737 "get_zone_info": false, 00:12:34.737 "zone_management": false, 00:12:34.737 "zone_append": false, 00:12:34.737 "compare": true, 00:12:34.737 "compare_and_write": true, 00:12:34.737 "abort": true, 00:12:34.737 "seek_hole": false, 00:12:34.737 "seek_data": false, 00:12:34.737 "copy": true, 00:12:34.737 "nvme_iov_md": false 00:12:34.737 }, 00:12:34.737 "memory_domains": [ 00:12:34.737 { 00:12:34.737 "dma_device_id": "system", 00:12:34.737 "dma_device_type": 1 00:12:34.737 } 00:12:34.737 ], 00:12:34.737 "driver_specific": { 00:12:34.737 "nvme": [ 00:12:34.737 { 00:12:34.737 "trid": { 00:12:34.737 "trtype": "TCP", 00:12:34.737 "adrfam": "IPv4", 00:12:34.737 "traddr": "10.0.0.2", 00:12:34.737 "trsvcid": "4420", 00:12:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:34.737 }, 00:12:34.737 "ctrlr_data": { 00:12:34.737 "cntlid": 1, 00:12:34.737 "vendor_id": "0x8086", 00:12:34.737 "model_number": "SPDK bdev Controller", 00:12:34.737 "serial_number": "SPDK0", 00:12:34.737 "firmware_revision": "25.01", 00:12:34.737 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:34.737 "oacs": { 00:12:34.737 "security": 0, 00:12:34.737 "format": 0, 00:12:34.737 "firmware": 0, 00:12:34.737 "ns_manage": 0 00:12:34.737 }, 00:12:34.737 "multi_ctrlr": true, 00:12:34.737 
"ana_reporting": false 00:12:34.737 }, 00:12:34.737 "vs": { 00:12:34.737 "nvme_version": "1.3" 00:12:34.737 }, 00:12:34.737 "ns_data": { 00:12:34.737 "id": 1, 00:12:34.737 "can_share": true 00:12:34.737 } 00:12:34.737 } 00:12:34.737 ], 00:12:34.737 "mp_policy": "active_passive" 00:12:34.737 } 00:12:34.737 } 00:12:34.737 ] 00:12:34.737 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3251283 00:12:34.737 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:34.737 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:34.997 Running I/O for 10 seconds... 00:12:35.942 Latency(us) 00:12:35.942 [2024-11-25T13:10:41.032Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.942 Nvme0n1 : 1.00 24857.00 97.10 0.00 0.00 0.00 0.00 0.00 00:12:35.942 [2024-11-25T13:10:41.032Z] =================================================================================================================== 00:12:35.942 [2024-11-25T13:10:41.033Z] Total : 24857.00 97.10 0.00 0.00 0.00 0.00 0.00 00:12:35.943 00:12:36.885 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:36.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.885 Nvme0n1 : 2.00 25095.00 98.03 0.00 0.00 0.00 0.00 0.00 00:12:36.885 [2024-11-25T13:10:41.975Z] =================================================================================================================== 00:12:36.885 [2024-11-25T13:10:41.975Z] Total : 25095.00 98.03 0.00 0.00 0.00 0.00 0.00 00:12:36.885 00:12:36.885 true 00:12:37.148 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:37.148 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:37.148 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:37.148 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:37.148 14:10:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3251283 00:12:38.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.090 Nvme0n1 : 3.00 25208.67 98.47 0.00 0.00 0.00 0.00 0.00 00:12:38.090 [2024-11-25T13:10:43.180Z] =================================================================================================================== 00:12:38.090 [2024-11-25T13:10:43.180Z] Total : 25208.67 98.47 0.00 0.00 0.00 0.00 0.00 00:12:38.090 00:12:39.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.033 Nvme0n1 : 4.00 25269.00 98.71 0.00 0.00 0.00 0.00 0.00 00:12:39.033 [2024-11-25T13:10:44.123Z] 
=================================================================================================================== 00:12:39.033 [2024-11-25T13:10:44.123Z] Total : 25269.00 98.71 0.00 0.00 0.00 0.00 0.00 00:12:39.033 00:12:39.975 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.975 Nvme0n1 : 5.00 25330.60 98.95 0.00 0.00 0.00 0.00 0.00 00:12:39.975 [2024-11-25T13:10:45.065Z] =================================================================================================================== 00:12:39.975 [2024-11-25T13:10:45.065Z] Total : 25330.60 98.95 0.00 0.00 0.00 0.00 0.00 00:12:39.975 00:12:40.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.918 Nvme0n1 : 6.00 25363.33 99.08 0.00 0.00 0.00 0.00 0.00 00:12:40.918 [2024-11-25T13:10:46.008Z] =================================================================================================================== 00:12:40.918 [2024-11-25T13:10:46.008Z] Total : 25363.33 99.08 0.00 0.00 0.00 0.00 0.00 00:12:40.918 00:12:41.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.861 Nvme0n1 : 7.00 25396.29 99.20 0.00 0.00 0.00 0.00 0.00 00:12:41.861 [2024-11-25T13:10:46.951Z] =================================================================================================================== 00:12:41.861 [2024-11-25T13:10:46.951Z] Total : 25396.29 99.20 0.00 0.00 0.00 0.00 0.00 00:12:41.861 00:12:42.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.805 Nvme0n1 : 8.00 25419.75 99.30 0.00 0.00 0.00 0.00 0.00 00:12:42.805 [2024-11-25T13:10:47.895Z] =================================================================================================================== 00:12:42.805 [2024-11-25T13:10:47.895Z] Total : 25419.75 99.30 0.00 0.00 0.00 0.00 0.00 00:12:42.805 00:12:44.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.190 Nvme0n1 : 9.00 25436.44 99.36 0.00 0.00 0.00 0.00 0.00 00:12:44.190 [2024-11-25T13:10:49.280Z] =================================================================================================================== 00:12:44.190 [2024-11-25T13:10:49.280Z] Total : 25436.44 99.36 0.00 0.00 0.00 0.00 0.00 00:12:44.190 00:12:45.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.134 Nvme0n1 : 10.00 25451.00 99.42 0.00 0.00 0.00 0.00 0.00 00:12:45.134 [2024-11-25T13:10:50.224Z] =================================================================================================================== 00:12:45.134 [2024-11-25T13:10:50.224Z] Total : 25451.00 99.42 0.00 0.00 0.00 0.00 0.00 00:12:45.134 00:12:45.134 00:12:45.134 Latency(us) 00:12:45.134 [2024-11-25T13:10:50.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.134 Nvme0n1 : 10.00 25456.01 99.44 0.00 0.00 5025.16 2211.84 14308.69 00:12:45.134 [2024-11-25T13:10:50.224Z] =================================================================================================================== 00:12:45.134 [2024-11-25T13:10:50.224Z] Total : 25456.01 99.44 0.00 0.00 5025.16 2211.84 14308.69 00:12:45.134 { 00:12:45.134 "results": [ 00:12:45.134 { 00:12:45.134 "job": "Nvme0n1", 00:12:45.134 "core_mask": "0x2", 00:12:45.134 "workload": "randwrite", 00:12:45.134 "status": "finished", 00:12:45.134 "queue_depth": 128, 00:12:45.134 "io_size": 4096, 00:12:45.134 
"runtime": 10.003061, 00:12:45.134 "iops": 25456.00791597692, 00:12:45.134 "mibps": 99.43753092178484, 00:12:45.134 "io_failed": 0, 00:12:45.134 "io_timeout": 0, 00:12:45.134 "avg_latency_us": 5025.161187882406, 00:12:45.134 "min_latency_us": 2211.84, 00:12:45.134 "max_latency_us": 14308.693333333333 00:12:45.134 } 00:12:45.134 ], 00:12:45.134 "core_count": 1 00:12:45.134 } 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3250941 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3250941 ']' 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3250941 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3250941 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3250941' 00:12:45.134 killing process with pid 3250941 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3250941 00:12:45.134 Received shutdown signal, test time was about 10.000000 seconds 00:12:45.134 00:12:45.134 Latency(us) 00:12:45.134 [2024-11-25T13:10:50.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:45.134 [2024-11-25T13:10:50.224Z] =================================================================================================================== 00:12:45.134 [2024-11-25T13:10:50.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:45.134 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3250941 00:12:45.134 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:45.395 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:45.395 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:45.395 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:45.657 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:45.657 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:45.657 14:10:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:45.918 [2024-11-25 14:10:50.814394] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.918 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:45.919 14:10:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:46.180 request: 00:12:46.180 { 00:12:46.180 "uuid": "ae34fc26-f4d6-442a-9d00-0bf803af2d32", 00:12:46.180 "method": "bdev_lvol_get_lvstores", 00:12:46.180 "req_id": 1 00:12:46.180 } 00:12:46.180 Got JSON-RPC error response 00:12:46.180 response: 00:12:46.180 { 00:12:46.180 "code": -19, 00:12:46.180 "message": "No such device" 00:12:46.180 } 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:46.180 aio_bdev 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a42d0b89-93a8-45bd-938a-a66b516ad7dd 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a42d0b89-93a8-45bd-938a-a66b516ad7dd 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.180 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:46.441 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a42d0b89-93a8-45bd-938a-a66b516ad7dd -t 2000 00:12:46.441 [ 00:12:46.441 { 00:12:46.441 "name": "a42d0b89-93a8-45bd-938a-a66b516ad7dd", 00:12:46.441 "aliases": [ 00:12:46.441 "lvs/lvol" 00:12:46.441 ], 00:12:46.441 "product_name": "Logical Volume", 00:12:46.441 "block_size": 4096, 00:12:46.441 "num_blocks": 38912, 00:12:46.441 "uuid": "a42d0b89-93a8-45bd-938a-a66b516ad7dd", 00:12:46.441 "assigned_rate_limits": { 00:12:46.441 "rw_ios_per_sec": 0, 00:12:46.441 "rw_mbytes_per_sec": 0, 00:12:46.441 "r_mbytes_per_sec": 0, 00:12:46.441 "w_mbytes_per_sec": 0 00:12:46.441 }, 00:12:46.441 "claimed": false, 00:12:46.441 "zoned": false, 00:12:46.441 "supported_io_types": { 00:12:46.441 "read": true, 00:12:46.441 "write": true, 00:12:46.441 "unmap": true, 00:12:46.441 "flush": false, 00:12:46.441 "reset": true, 00:12:46.441 "nvme_admin": false, 00:12:46.441 "nvme_io": false, 00:12:46.441 "nvme_io_md": false, 00:12:46.441 "write_zeroes": true, 00:12:46.441 "zcopy": false, 00:12:46.441 "get_zone_info": false, 00:12:46.441 "zone_management": false, 00:12:46.441 "zone_append": false, 00:12:46.441 "compare": false, 00:12:46.441 "compare_and_write": false, 00:12:46.441 "abort": false, 00:12:46.441 "seek_hole": true, 00:12:46.441 "seek_data": true, 00:12:46.441 "copy": false, 00:12:46.441 "nvme_iov_md": false 00:12:46.441 }, 00:12:46.441 "driver_specific": { 00:12:46.441 "lvol": { 00:12:46.441 "lvol_store_uuid": "ae34fc26-f4d6-442a-9d00-0bf803af2d32", 00:12:46.441 "base_bdev": "aio_bdev", 00:12:46.441 "thin_provision": false, 00:12:46.441 "num_allocated_clusters": 38, 00:12:46.441 "snapshot": false, 00:12:46.441 "clone": false, 00:12:46.441 "esnap_clone": false 00:12:46.441 } 00:12:46.441 } 00:12:46.441 } 00:12:46.441 ] 00:12:46.441 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:46.441 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:46.441 
14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:46.701 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:46.702 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:46.702 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:46.962 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:46.962 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a42d0b89-93a8-45bd-938a-a66b516ad7dd 00:12:46.962 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ae34fc26-f4d6-442a-9d00-0bf803af2d32 00:12:47.224 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:47.486 00:12:47.486 real 0m15.969s 00:12:47.486 user 0m15.708s 00:12:47.486 sys 0m1.360s 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:47.486 ************************************ 00:12:47.486 END TEST lvs_grow_clean 00:12:47.486 ************************************ 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:47.486 ************************************ 00:12:47.486 START TEST lvs_grow_dirty 00:12:47.486 ************************************ 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:47.486 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:47.748 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:47.748 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:48.086 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:12:48.086 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:12:48.086 14:10:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:48.086 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:48.086 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:48.086 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad lvol 150 00:12:48.382 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2a366326-d872-4b3b-9398-7b27c507a120 00:12:48.382 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:48.382 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:48.382 [2024-11-25 14:10:53.384684] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:48.382 [2024-11-25 14:10:53.384723] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:48.382 true 00:12:48.382 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:12:48.382 14:10:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:48.642 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:48.642 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:48.642 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2a366326-d872-4b3b-9398-7b27c507a120 00:12:48.903 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:49.163 [2024-11-25 14:10:54.042615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3254252 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3254252 /var/tmp/bdevperf.sock 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3254252 ']' 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.163 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:49.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:49.164 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.164 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:49.424 [2024-11-25 14:10:54.257690] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:12:49.424 [2024-11-25 14:10:54.257742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254252 ] 00:12:49.424 [2024-11-25 14:10:54.338611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.424 [2024-11-25 14:10:54.368249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.424 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.424 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:49.424 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:49.995 Nvme0n1 00:12:49.995 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:49.995 [ 00:12:49.995 { 00:12:49.995 "name": "Nvme0n1", 00:12:49.995 "aliases": [ 00:12:49.995 "2a366326-d872-4b3b-9398-7b27c507a120" 00:12:49.995 ], 00:12:49.995 "product_name": "NVMe disk", 00:12:49.995 "block_size": 4096, 00:12:49.995 "num_blocks": 38912, 00:12:49.995 "uuid": "2a366326-d872-4b3b-9398-7b27c507a120", 00:12:49.995 "numa_id": 0, 00:12:49.995 "assigned_rate_limits": { 00:12:49.995 "rw_ios_per_sec": 0, 00:12:49.995 "rw_mbytes_per_sec": 0, 00:12:49.995 "r_mbytes_per_sec": 0, 00:12:49.995 "w_mbytes_per_sec": 0 00:12:49.995 }, 00:12:49.995 "claimed": false, 00:12:49.995 "zoned": false, 00:12:49.995 "supported_io_types": { 00:12:49.995 "read": true, 00:12:49.995 "write": true, 00:12:49.995 "unmap": true, 00:12:49.995 "flush": true, 00:12:49.995 "reset": true, 00:12:49.995 "nvme_admin": true, 00:12:49.995 "nvme_io": true, 00:12:49.995 "nvme_io_md": false, 00:12:49.995 "write_zeroes": true, 00:12:49.995 "zcopy": false, 00:12:49.995 "get_zone_info": false, 00:12:49.995 "zone_management": false, 00:12:49.995 "zone_append": false, 00:12:49.995 "compare": true, 00:12:49.995 "compare_and_write": true, 00:12:49.995 "abort": true, 00:12:49.995 "seek_hole": false, 00:12:49.995 "seek_data": false, 00:12:49.995 "copy": true, 00:12:49.995 "nvme_iov_md": false 00:12:49.995 }, 00:12:49.995 "memory_domains": [ 00:12:49.995 { 00:12:49.995 "dma_device_id": "system", 00:12:49.995 "dma_device_type": 1 00:12:49.995 } 00:12:49.995 ], 00:12:49.995 "driver_specific": { 00:12:49.995 "nvme": [ 00:12:49.995 { 00:12:49.995 "trid": { 00:12:49.995 "trtype": "TCP", 00:12:49.995 "adrfam": "IPv4", 00:12:49.995 "traddr": "10.0.0.2", 00:12:49.995 "trsvcid": "4420", 00:12:49.995 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:49.995 }, 00:12:49.995 "ctrlr_data": { 00:12:49.995 "cntlid": 1, 00:12:49.995 "vendor_id": "0x8086", 00:12:49.995 "model_number": "SPDK bdev Controller", 00:12:49.995 "serial_number": "SPDK0", 00:12:49.995 "firmware_revision": "25.01", 00:12:49.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:49.995 "oacs": { 00:12:49.995 "security": 0, 00:12:49.995 "format": 0, 00:12:49.995 "firmware": 0, 00:12:49.995 "ns_manage": 0 00:12:49.995 }, 00:12:49.995 "multi_ctrlr": true, 00:12:49.995 
"ana_reporting": false 00:12:49.995 }, 00:12:49.995 "vs": { 00:12:49.995 "nvme_version": "1.3" 00:12:49.995 }, 00:12:49.995 "ns_data": { 00:12:49.995 "id": 1, 00:12:49.995 "can_share": true 00:12:49.995 } 00:12:49.995 } 00:12:49.995 ], 00:12:49.995 "mp_policy": "active_passive" 00:12:49.995 } 00:12:49.995 } 00:12:49.995 ] 00:12:49.995 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3254369 00:12:49.995 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:49.995 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:50.255 Running I/O for 10 seconds... 00:12:51.197 Latency(us) 00:12:51.197 [2024-11-25T13:10:56.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.197 Nvme0n1 : 1.00 25048.00 97.84 0.00 0.00 0.00 0.00 0.00 00:12:51.197 [2024-11-25T13:10:56.287Z] =================================================================================================================== 00:12:51.197 [2024-11-25T13:10:56.287Z] Total : 25048.00 97.84 0.00 0.00 0.00 0.00 0.00 00:12:51.197 00:12:52.136 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:12:52.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.136 Nvme0n1 : 2.00 25194.00 98.41 0.00 0.00 0.00 0.00 0.00 00:12:52.136 [2024-11-25T13:10:57.226Z] =================================================================================================================== 00:12:52.136 [2024-11-25T13:10:57.226Z] Total : 25194.00 98.41 0.00 0.00 0.00 0.00 0.00 00:12:52.136 00:12:52.136 true 00:12:52.136 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:12:52.136 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:52.397 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:52.397 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:52.397 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3254369 00:12:53.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.340 Nvme0n1 : 3.00 25294.33 98.81 0.00 0.00 0.00 0.00 0.00 00:12:53.340 [2024-11-25T13:10:58.430Z] =================================================================================================================== 00:12:53.340 [2024-11-25T13:10:58.430Z] Total : 25294.33 98.81 0.00 0.00 0.00 0.00 0.00 00:12:53.340 00:12:54.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.280 Nvme0n1 : 4.00 25354.50 99.04 0.00 0.00 0.00 0.00 0.00 00:12:54.280 [2024-11-25T13:10:59.370Z] 
=================================================================================================================== 00:12:54.280 [2024-11-25T13:10:59.370Z] Total : 25354.50 99.04 0.00 0.00 0.00 0.00 0.00 00:12:54.280 00:12:55.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.220 Nvme0n1 : 5.00 25381.40 99.15 0.00 0.00 0.00 0.00 0.00 00:12:55.220 [2024-11-25T13:11:00.310Z] =================================================================================================================== 00:12:55.220 [2024-11-25T13:11:00.310Z] Total : 25381.40 99.15 0.00 0.00 0.00 0.00 0.00 00:12:55.220 00:12:56.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.161 Nvme0n1 : 6.00 25414.83 99.28 0.00 0.00 0.00 0.00 0.00 00:12:56.161 [2024-11-25T13:11:01.251Z] =================================================================================================================== 00:12:56.161 [2024-11-25T13:11:01.251Z] Total : 25414.83 99.28 0.00 0.00 0.00 0.00 0.00 00:12:56.161 00:12:57.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.103 Nvme0n1 : 7.00 25441.00 99.38 0.00 0.00 0.00 0.00 0.00 00:12:57.103 [2024-11-25T13:11:02.193Z] =================================================================================================================== 00:12:57.103 [2024-11-25T13:11:02.193Z] Total : 25441.00 99.38 0.00 0.00 0.00 0.00 0.00 00:12:57.103 00:12:58.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.487 Nvme0n1 : 8.00 25461.00 99.46 0.00 0.00 0.00 0.00 0.00 00:12:58.487 [2024-11-25T13:11:03.577Z] =================================================================================================================== 00:12:58.487 [2024-11-25T13:11:03.577Z] Total : 25461.00 99.46 0.00 0.00 0.00 0.00 0.00 00:12:58.487 00:12:59.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.057 Nvme0n1 : 9.00 25476.33 99.52 0.00 0.00 0.00 0.00 0.00 00:12:59.057 [2024-11-25T13:11:04.147Z] =================================================================================================================== 00:12:59.057 [2024-11-25T13:11:04.147Z] Total : 25476.33 99.52 0.00 0.00 0.00 0.00 0.00 00:12:59.057 00:13:00.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.439 Nvme0n1 : 10.00 25494.90 99.59 0.00 0.00 0.00 0.00 0.00 00:13:00.439 [2024-11-25T13:11:05.529Z] =================================================================================================================== 00:13:00.439 [2024-11-25T13:11:05.529Z] Total : 25494.90 99.59 0.00 0.00 0.00 0.00 0.00 00:13:00.439 00:13:00.439 00:13:00.439 Latency(us) 00:13:00.439 [2024-11-25T13:11:05.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.439 Nvme0n1 : 10.00 25492.01 99.58 0.00 0.00 5017.77 3085.65 13161.81 00:13:00.439 [2024-11-25T13:11:05.529Z] =================================================================================================================== 00:13:00.439 [2024-11-25T13:11:05.529Z] Total : 25492.01 99.58 0.00 0.00 5017.77 3085.65 13161.81 00:13:00.439 { 00:13:00.439 "results": [ 00:13:00.439 { 00:13:00.439 "job": "Nvme0n1", 00:13:00.439 "core_mask": "0x2", 00:13:00.439 "workload": "randwrite", 00:13:00.439 "status": "finished", 00:13:00.439 "queue_depth": 128, 00:13:00.439 "io_size": 4096, 00:13:00.439 
"runtime": 10.003682, 00:13:00.439 "iops": 25492.013840503925, 00:13:00.439 "mibps": 99.57817906446846, 00:13:00.439 "io_failed": 0, 00:13:00.439 "io_timeout": 0, 00:13:00.439 "avg_latency_us": 5017.774083828077, 00:13:00.439 "min_latency_us": 3085.653333333333, 00:13:00.439 "max_latency_us": 13161.813333333334 00:13:00.439 } 00:13:00.439 ], 00:13:00.439 "core_count": 1 00:13:00.439 } 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3254252 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3254252 ']' 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3254252 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3254252 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3254252' 00:13:00.439 killing process with pid 3254252 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3254252 00:13:00.439 Received shutdown signal, test time was about 10.000000 seconds 00:13:00.439 00:13:00.439 Latency(us) 00:13:00.439 [2024-11-25T13:11:05.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.439 [2024-11-25T13:11:05.529Z] =================================================================================================================== 00:13:00.439 [2024-11-25T13:11:05.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3254252 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:00.439 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:00.700 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:00.700 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:00.961 14:11:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3250392 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3250392 00:13:00.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3250392 Killed "${NVMF_APP[@]}" "$@" 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3256505 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3256505 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3256505 ']' 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.961 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:00.961 [2024-11-25 14:11:06.007336] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:13:00.961 [2024-11-25 14:11:06.007390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.221 [2024-11-25 14:11:06.094705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.221 [2024-11-25 14:11:06.123777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.221 [2024-11-25 14:11:06.123803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.221 [2024-11-25 14:11:06.123809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.221 [2024-11-25 14:11:06.123813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:13:01.221 [2024-11-25 14:11:06.123818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.221 [2024-11-25 14:11:06.124269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.792 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.792 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:13:01.792 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:01.792 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:01.792 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:01.792 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.792 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:02.052 [2024-11-25 14:11:06.981571] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:02.052 [2024-11-25 14:11:06.981643] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:02.052 [2024-11-25 14:11:06.981665] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2a366326-d872-4b3b-9398-7b27c507a120 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2a366326-d872-4b3b-9398-7b27c507a120 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:02.052 14:11:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:02.313 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2a366326-d872-4b3b-9398-7b27c507a120 -t 2000 00:13:02.313 [ 00:13:02.313 { 00:13:02.313 "name": "2a366326-d872-4b3b-9398-7b27c507a120", 00:13:02.313 "aliases": [ 00:13:02.313 "lvs/lvol" 00:13:02.313 ], 00:13:02.313 "product_name": "Logical Volume", 00:13:02.313 "block_size": 4096, 00:13:02.313 "num_blocks": 38912, 00:13:02.313 "uuid": "2a366326-d872-4b3b-9398-7b27c507a120", 00:13:02.313 "assigned_rate_limits": { 00:13:02.313 "rw_ios_per_sec": 0, 00:13:02.313 "rw_mbytes_per_sec": 0, 
00:13:02.313 "r_mbytes_per_sec": 0, 00:13:02.313 "w_mbytes_per_sec": 0 00:13:02.313 }, 00:13:02.313 "claimed": false, 00:13:02.313 "zoned": false, 00:13:02.313 "supported_io_types": { 00:13:02.313 "read": true, 00:13:02.313 "write": true, 00:13:02.313 "unmap": true, 00:13:02.313 "flush": false, 00:13:02.313 "reset": true, 00:13:02.313 "nvme_admin": false, 00:13:02.313 "nvme_io": false, 00:13:02.313 "nvme_io_md": false, 00:13:02.313 "write_zeroes": true, 00:13:02.313 "zcopy": false, 00:13:02.313 "get_zone_info": false, 00:13:02.313 "zone_management": false, 00:13:02.313 "zone_append": false, 00:13:02.313 "compare": false, 00:13:02.313 "compare_and_write": false, 00:13:02.313 "abort": false, 00:13:02.313 "seek_hole": true, 00:13:02.313 "seek_data": true, 00:13:02.313 "copy": false, 00:13:02.313 "nvme_iov_md": false 00:13:02.313 }, 00:13:02.313 "driver_specific": { 00:13:02.313 "lvol": { 00:13:02.313 "lvol_store_uuid": "8b9045dd-1ee4-4794-bd69-e28bdad625ad", 00:13:02.313 "base_bdev": "aio_bdev", 00:13:02.313 "thin_provision": false, 00:13:02.313 "num_allocated_clusters": 38, 00:13:02.313 "snapshot": false, 00:13:02.313 "clone": false, 00:13:02.313 "esnap_clone": false 00:13:02.313 } 00:13:02.313 } 00:13:02.313 } 00:13:02.313 ] 00:13:02.313 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:02.313 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:02.313 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:02.574 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:02.574 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:02.574 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:02.574 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:02.574 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:02.835 [2024-11-25 14:11:07.810182] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:02.835 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:03.095 request: 00:13:03.095 { 00:13:03.095 "uuid": "8b9045dd-1ee4-4794-bd69-e28bdad625ad", 00:13:03.095 "method": "bdev_lvol_get_lvstores", 00:13:03.095 "req_id": 1 00:13:03.095 } 00:13:03.095 Got JSON-RPC error response 00:13:03.095 response: 00:13:03.095 { 00:13:03.095 "code": -19, 00:13:03.095 "message": "No such device" 00:13:03.095 } 00:13:03.095 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:13:03.096 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:03.096 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:03.096 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:03.096 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:03.356 aio_bdev 00:13:03.356 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2a366326-d872-4b3b-9398-7b27c507a120 00:13:03.356 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2a366326-d872-4b3b-9398-7b27c507a120 00:13:03.356 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:03.356 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:13:03.356 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:03.356 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:03.356 14:11:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:03.356 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 2a366326-d872-4b3b-9398-7b27c507a120 -t 2000 00:13:03.617 [ 00:13:03.617 { 00:13:03.617 "name": "2a366326-d872-4b3b-9398-7b27c507a120", 00:13:03.617 "aliases": [ 00:13:03.617 "lvs/lvol" 00:13:03.617 ], 00:13:03.617 "product_name": "Logical Volume", 00:13:03.617 "block_size": 4096, 00:13:03.617 "num_blocks": 38912, 00:13:03.617 "uuid": "2a366326-d872-4b3b-9398-7b27c507a120", 00:13:03.617 "assigned_rate_limits": { 00:13:03.617 "rw_ios_per_sec": 0, 00:13:03.617 "rw_mbytes_per_sec": 0, 00:13:03.617 "r_mbytes_per_sec": 0, 00:13:03.617 "w_mbytes_per_sec": 0 00:13:03.617 }, 00:13:03.617 "claimed": false, 00:13:03.617 "zoned": false, 00:13:03.617 "supported_io_types": { 00:13:03.617 "read": true, 00:13:03.617 "write": true, 00:13:03.617 "unmap": true, 00:13:03.617 "flush": false, 00:13:03.617 "reset": true, 00:13:03.617 "nvme_admin": false, 00:13:03.617 "nvme_io": false, 00:13:03.617 "nvme_io_md": false, 00:13:03.617 "write_zeroes": true, 00:13:03.617 "zcopy": false, 00:13:03.617 "get_zone_info": false, 00:13:03.617 "zone_management": false, 00:13:03.617 "zone_append": false, 00:13:03.617 "compare": false, 00:13:03.617 "compare_and_write": false, 00:13:03.617 "abort": false, 00:13:03.617 "seek_hole": true, 00:13:03.617 "seek_data": true, 00:13:03.617 "copy": false, 00:13:03.617 "nvme_iov_md": false 00:13:03.617 }, 00:13:03.617 "driver_specific": { 00:13:03.617 "lvol": { 00:13:03.617 "lvol_store_uuid": "8b9045dd-1ee4-4794-bd69-e28bdad625ad", 00:13:03.618 "base_bdev": "aio_bdev", 00:13:03.618 "thin_provision": false, 00:13:03.618 "num_allocated_clusters": 38, 00:13:03.618 "snapshot": false, 00:13:03.618 "clone": false, 00:13:03.618 "esnap_clone": false 00:13:03.618 } 00:13:03.618 } 00:13:03.618 } 00:13:03.618 ] 00:13:03.618 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:13:03.618 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:03.618 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:03.877 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:03.877 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:03.877 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:03.877 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:03.878 14:11:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2a366326-d872-4b3b-9398-7b27c507a120 00:13:04.174 14:11:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b9045dd-1ee4-4794-bd69-e28bdad625ad 00:13:04.174 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:04.433 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:04.434 00:13:04.434 real 0m16.942s 00:13:04.434 user 0m44.594s 00:13:04.434 sys 0m3.004s 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:04.434 ************************************ 00:13:04.434 END TEST lvs_grow_dirty 00:13:04.434 ************************************ 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:04.434 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:04.434 nvmf_trace.0 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.694 rmmod nvme_tcp 00:13:04.694 rmmod nvme_fabrics 00:13:04.694 rmmod nvme_keyring 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:13:04.694 
14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3256505 ']' 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3256505 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3256505 ']' 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3256505 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3256505 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3256505' 00:13:04.694 killing process with pid 3256505 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3256505 00:13:04.694 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3256505 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.953 14:11:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:06.867 00:13:06.867 real 0m44.266s 00:13:06.867 user 1m6.591s 00:13:06.867 sys 0m10.513s 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:06.867 ************************************ 00:13:06.867 END TEST nvmf_lvs_grow 00:13:06.867 ************************************ 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:06.867 ************************************ 00:13:06.867 START TEST nvmf_bdev_io_wait 00:13:06.867 ************************************ 00:13:06.867 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:07.129 * Looking for test storage... 00:13:07.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.129 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.129 --rc genhtml_branch_coverage=1 00:13:07.129 --rc genhtml_function_coverage=1 00:13:07.129 --rc genhtml_legend=1 00:13:07.130 --rc geninfo_all_blocks=1 00:13:07.130 --rc geninfo_unexecuted_blocks=1 00:13:07.130 00:13:07.130 ' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.130 --rc genhtml_branch_coverage=1 00:13:07.130 --rc genhtml_function_coverage=1 00:13:07.130 --rc genhtml_legend=1 00:13:07.130 --rc geninfo_all_blocks=1 00:13:07.130 --rc geninfo_unexecuted_blocks=1 00:13:07.130 00:13:07.130 ' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.130 --rc genhtml_branch_coverage=1 00:13:07.130 --rc genhtml_function_coverage=1 00:13:07.130 --rc genhtml_legend=1 00:13:07.130 --rc geninfo_all_blocks=1 00:13:07.130 --rc geninfo_unexecuted_blocks=1 00:13:07.130 00:13:07.130 ' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.130 --rc genhtml_branch_coverage=1 00:13:07.130 --rc genhtml_function_coverage=1 00:13:07.130 --rc genhtml_legend=1 00:13:07.130 --rc geninfo_all_blocks=1 00:13:07.130 --rc geninfo_unexecuted_blocks=1 00:13:07.130 00:13:07.130 ' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.130 14:11:12 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.130 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.130 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.131 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.131 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.131 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:07.131 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:07.131 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:13:07.131 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:13:15.276 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:15.277 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:15.277 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:15.277 14:11:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:15.277 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:15.277 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:15.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:15.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:13:15.277 00:13:15.277 --- 10.0.0.2 ping statistics --- 00:13:15.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.277 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:15.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:15.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:13:15.277 00:13:15.277 --- 10.0.0.1 ping statistics --- 00:13:15.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:15.277 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:15.277 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3261487 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3261487 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3261487 ']' 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:15.278 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.278 [2024-11-25 14:11:19.722417] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
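
Condensed from the nvmf_tcp_init trace above, a minimal sketch of the bring-up this run performs: move one E810 port into a private network namespace, address both ends of the 10.0.0.0/24 link, open TCP port 4420, verify reachability in both directions, then launch the SPDK target inside the namespace. Interface names (cvl_0_0/cvl_0_1), addresses, and flags are copied from this trace and will differ on other hosts.

#!/usr/bin/env bash
# Sketch of the namespace setup recorded above; assumes the cvl_0_* netdevs exist.
set -euo pipefail
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                          # target-side namespace
ip link set cvl_0_0 netns "$NS"             # first port becomes the target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port

ping -c 1 10.0.0.2                          # root ns -> target, as in the trace
ip netns exec "$NS" ping -c 1 10.0.0.1      # target ns -> initiator

# Start the target paused; configuration happens over RPC afterwards
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
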
00:13:15.278 [2024-11-25 14:11:19.722484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.278 [2024-11-25 14:11:19.821952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.278 [2024-11-25 14:11:19.876770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.278 [2024-11-25 14:11:19.876825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.278 [2024-11-25 14:11:19.876834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.278 [2024-11-25 14:11:19.876841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.278 [2024-11-25 14:11:19.876847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.278 [2024-11-25 14:11:19.878935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.278 [2024-11-25 14:11:19.879100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.278 [2024-11-25 14:11:19.879236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.278 [2024-11-25 14:11:19.879260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.540 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:13:15.804 [2024-11-25 14:11:20.678872] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.804 Malloc0 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:15.804 [2024-11-25 14:11:20.744293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3261832 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3261834 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:15.804 { 00:13:15.804 "params": { 
00:13:15.804 "name": "Nvme$subsystem", 00:13:15.804 "trtype": "$TEST_TRANSPORT", 00:13:15.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.804 "adrfam": "ipv4", 00:13:15.804 "trsvcid": "$NVMF_PORT", 00:13:15.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.804 "hdgst": ${hdgst:-false}, 00:13:15.804 "ddgst": ${ddgst:-false} 00:13:15.804 }, 00:13:15.804 "method": "bdev_nvme_attach_controller" 00:13:15.804 } 00:13:15.804 EOF 00:13:15.804 )") 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3261836 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:15.804 { 00:13:15.804 "params": { 00:13:15.804 "name": "Nvme$subsystem", 00:13:15.804 "trtype": "$TEST_TRANSPORT", 00:13:15.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.804 "adrfam": "ipv4", 00:13:15.804 "trsvcid": "$NVMF_PORT", 00:13:15.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.804 "hdgst": ${hdgst:-false}, 00:13:15.804 "ddgst": ${ddgst:-false} 00:13:15.804 }, 00:13:15.804 "method": "bdev_nvme_attach_controller" 00:13:15.804 } 00:13:15.804 EOF 00:13:15.804 )") 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3261839 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:15.804 { 00:13:15.804 "params": { 00:13:15.804 "name": "Nvme$subsystem", 00:13:15.804 "trtype": "$TEST_TRANSPORT", 00:13:15.804 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.804 "adrfam": "ipv4", 00:13:15.804 "trsvcid": "$NVMF_PORT", 00:13:15.804 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.804 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.804 "hdgst": ${hdgst:-false}, 
00:13:15.804 "ddgst": ${ddgst:-false} 00:13:15.804 }, 00:13:15.804 "method": "bdev_nvme_attach_controller" 00:13:15.804 } 00:13:15.804 EOF 00:13:15.804 )") 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:15.804 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:15.804 { 00:13:15.804 "params": { 00:13:15.804 "name": "Nvme$subsystem", 00:13:15.805 "trtype": "$TEST_TRANSPORT", 00:13:15.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:15.805 "adrfam": "ipv4", 00:13:15.805 "trsvcid": "$NVMF_PORT", 00:13:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:15.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:15.805 "hdgst": ${hdgst:-false}, 00:13:15.805 "ddgst": ${ddgst:-false} 00:13:15.805 }, 00:13:15.805 "method": "bdev_nvme_attach_controller" 00:13:15.805 } 00:13:15.805 EOF 00:13:15.805 )") 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3261832 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:15.805 "params": { 00:13:15.805 "name": "Nvme1", 00:13:15.805 "trtype": "tcp", 00:13:15.805 "traddr": "10.0.0.2", 00:13:15.805 "adrfam": "ipv4", 00:13:15.805 "trsvcid": "4420", 00:13:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.805 "hdgst": false, 00:13:15.805 "ddgst": false 00:13:15.805 }, 00:13:15.805 "method": "bdev_nvme_attach_controller" 00:13:15.805 }' 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:15.805 "params": { 00:13:15.805 "name": "Nvme1", 00:13:15.805 "trtype": "tcp", 00:13:15.805 "traddr": "10.0.0.2", 00:13:15.805 "adrfam": "ipv4", 00:13:15.805 "trsvcid": "4420", 00:13:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.805 "hdgst": false, 00:13:15.805 "ddgst": false 00:13:15.805 }, 00:13:15.805 "method": "bdev_nvme_attach_controller" 00:13:15.805 }' 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:15.805 "params": { 00:13:15.805 "name": "Nvme1", 00:13:15.805 "trtype": "tcp", 00:13:15.805 "traddr": "10.0.0.2", 00:13:15.805 "adrfam": "ipv4", 00:13:15.805 "trsvcid": "4420", 00:13:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.805 "hdgst": false, 00:13:15.805 "ddgst": false 00:13:15.805 }, 00:13:15.805 "method": "bdev_nvme_attach_controller" 00:13:15.805 }' 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:13:15.805 14:11:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:15.805 "params": { 00:13:15.805 "name": "Nvme1", 00:13:15.805 "trtype": "tcp", 00:13:15.805 "traddr": "10.0.0.2", 00:13:15.805 "adrfam": "ipv4", 00:13:15.805 "trsvcid": "4420", 00:13:15.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.805 "hdgst": false, 00:13:15.805 "ddgst": false 00:13:15.805 }, 00:13:15.805 "method": "bdev_nvme_attach_controller" 00:13:15.805 }' 00:13:15.805 [2024-11-25 14:11:20.802784] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:13:15.805 [2024-11-25 14:11:20.802858] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:15.805 [2024-11-25 14:11:20.807313] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:13:15.805 [2024-11-25 14:11:20.807378] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:15.805 [2024-11-25 14:11:20.809650] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:13:15.805 [2024-11-25 14:11:20.809724] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:15.805 [2024-11-25 14:11:20.817097] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
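
Each of the four workers is the bdevperf example app reading its attach config from fd 63; the /dev/fd/63 path in the trace is simply what the process substitution below expands to. A sketch of the write instance follows (the read, flush, and unmap workers differ only in core mask, -i instance id, and -w workload); gen_nvmf_target_json is the test/nvmf/common.sh helper traced above, which wraps the printed bdev_nvme_attach_controller params into a full JSON config. Queue depth 128 of 4 KiB I/Os against the 5-entry bdev_io pool keeps the io_wait queue busy for the 1-second run.

#!/usr/bin/env bash
# Sketch of one bdevperf invocation from the trace; assumes the suite's
# test/nvmf/common.sh has been sourced so gen_nvmf_target_json is defined.
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256
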
00:13:15.805 [2024-11-25 14:11:20.817166] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:16.067 [2024-11-25 14:11:21.015422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.067 [2024-11-25 14:11:21.055085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:16.067 [2024-11-25 14:11:21.107852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.067 [2024-11-25 14:11:21.147673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:16.329 [2024-11-25 14:11:21.201224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.329 [2024-11-25 14:11:21.238092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:16.329 [2024-11-25 14:11:21.270095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.329 [2024-11-25 14:11:21.307817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:16.329 Running I/O for 1 seconds... 00:13:16.590 Running I/O for 1 seconds... 00:13:16.590 Running I/O for 1 seconds... 00:13:16.590 Running I/O for 1 seconds... 00:13:17.534 11021.00 IOPS, 43.05 MiB/s 00:13:17.534 Latency(us) 00:13:17.534 [2024-11-25T13:11:22.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.534 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:17.534 Nvme1n1 : 1.01 11063.11 43.22 0.00 0.00 11523.02 6498.99 18786.99 00:13:17.534 [2024-11-25T13:11:22.624Z] =================================================================================================================== 00:13:17.534 [2024-11-25T13:11:22.624Z] Total : 11063.11 43.22 0.00 0.00 11523.02 6498.99 18786.99 00:13:17.534 8755.00 IOPS, 34.20 MiB/s 00:13:17.534 Latency(us) 00:13:17.534 [2024-11-25T13:11:22.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.534 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:17.534 Nvme1n1 : 1.01 8828.41 34.49 0.00 0.00 14440.35 6335.15 25340.59 00:13:17.534 [2024-11-25T13:11:22.624Z] =================================================================================================================== 00:13:17.534 [2024-11-25T13:11:22.624Z] Total : 8828.41 34.49 0.00 0.00 14440.35 6335.15 25340.59 00:13:17.534 10762.00 IOPS, 42.04 MiB/s 00:13:17.534 Latency(us) 00:13:17.534 [2024-11-25T13:11:22.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.534 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:17.534 Nvme1n1 : 1.01 10849.02 42.38 0.00 0.00 11760.80 4341.76 21954.56 00:13:17.534 [2024-11-25T13:11:22.624Z] =================================================================================================================== 00:13:17.534 [2024-11-25T13:11:22.624Z] Total : 10849.02 42.38 0.00 0.00 11760.80 4341.76 21954.56 00:13:17.534 181816.00 IOPS, 710.22 MiB/s 00:13:17.534 Latency(us) 00:13:17.534 [2024-11-25T13:11:22.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.534 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:17.534 Nvme1n1 : 1.00 181458.18 708.82 0.00 0.00 701.58 302.08 1966.08 00:13:17.534 [2024-11-25T13:11:22.624Z] 
=================================================================================================================== 00:13:17.534 [2024-11-25T13:11:22.624Z] Total : 181458.18 708.82 0.00 0.00 701.58 302.08 1966.08 00:13:17.534 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3261834 00:13:17.534 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3261836 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3261839 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:17.796 rmmod nvme_tcp 00:13:17.796 rmmod nvme_fabrics 00:13:17.796 rmmod nvme_keyring 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3261487 ']' 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3261487 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3261487 ']' 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3261487 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3261487 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3261487' 00:13:17.796 killing process with pid 3261487 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3261487 00:13:17.796 14:11:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3261487 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.058 14:11:23 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.607 00:13:20.607 real 0m13.146s 00:13:20.607 user 0m20.050s 00:13:20.607 sys 0m7.483s 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:20.607 ************************************ 00:13:20.607 END TEST nvmf_bdev_io_wait 00:13:20.607 ************************************ 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:20.607 ************************************ 00:13:20.607 START TEST nvmf_queue_depth 00:13:20.607 ************************************ 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:20.607 * Looking for test storage... 
00:13:20.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.607 --rc genhtml_branch_coverage=1 00:13:20.607 --rc genhtml_function_coverage=1 00:13:20.607 --rc genhtml_legend=1 00:13:20.607 --rc geninfo_all_blocks=1 00:13:20.607 --rc geninfo_unexecuted_blocks=1 00:13:20.607 00:13:20.607 ' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.607 --rc genhtml_branch_coverage=1 00:13:20.607 --rc genhtml_function_coverage=1 00:13:20.607 --rc genhtml_legend=1 00:13:20.607 --rc geninfo_all_blocks=1 00:13:20.607 --rc geninfo_unexecuted_blocks=1 00:13:20.607 00:13:20.607 ' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.607 --rc genhtml_branch_coverage=1 00:13:20.607 --rc genhtml_function_coverage=1 00:13:20.607 --rc genhtml_legend=1 00:13:20.607 --rc geninfo_all_blocks=1 00:13:20.607 --rc geninfo_unexecuted_blocks=1 00:13:20.607 00:13:20.607 ' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.607 --rc genhtml_branch_coverage=1 00:13:20.607 --rc genhtml_function_coverage=1 00:13:20.607 --rc genhtml_legend=1 00:13:20.607 --rc geninfo_all_blocks=1 00:13:20.607 --rc geninfo_unexecuted_blocks=1 00:13:20.607 00:13:20.607 ' 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.607 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
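[annotation] The `line 33: [: : integer expression expected` message captured above is a genuine shell diagnostic from nvmf/common.sh: the traced test `'[' '' -eq 1 ']'` hands the `[` builtin an empty string where `-eq` requires an integer, so it prints the error and returns status 2, which still behaves as false for the conditional that follows. A tiny reproduction plus one conventional guard (the variable name here is made up for illustration):

    # Reproduces the logged diagnostic: an empty string is not an integer.
    SPDK_SOME_FLAG=''
    [ "$SPDK_SOME_FLAG" -eq 1 ] && echo enabled
    # -> [: : integer expression expected   (exit status 2)

    # Conventional guard: default the value so the operand is always numeric.
    [ "${SPDK_SOME_FLAG:-0}" -eq 1 ] && echo enabled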
MALLOC_BLOCK_SIZE=512 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.608 14:11:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.794 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:28.795 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:28.795 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:28.795 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:28.795 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
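[annotation] The discovery loop traced above (nvmf/common.sh lines 410-428) resolves each matching e810 PCI function to its kernel net devices by globbing sysfs, which is where the `Found net devices under 0000:4b:00.0: cvl_0_0` lines come from. A standalone sketch of the same lookup, using a PCI address from this run (the `[[ up == up ]]` test in the trace suggests the harness also checks the interface state before using it):

    # Map a PCI function to its bound kernel net devices via sysfs.
    pci=0000:4b:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] || continue        # glob stays literal if no netdev is bound
        name=${dev##*/}                  # strip the sysfs path, keep e.g. cvl_0_0
        echo "Found net device under $pci: $name"
    done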
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:13:28.795 00:13:28.795 --- 10.0.0.2 ping statistics --- 00:13:28.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.795 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:13:28.795 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:13:28.796 00:13:28.796 --- 10.0.0.1 ping statistics --- 00:13:28.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.796 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3266536 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3266536 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3266536 ']' 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.796 14:11:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 [2024-11-25 14:11:33.008235] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
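[annotation] nvmf_tcp_init, traced above, wires up the physical two-port topology used by these tests: port cvl_0_0 moves into a private namespace and becomes the target side at 10.0.0.2, port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule (comment-tagged so teardown can strip exactly the SPDK-added entries) opens TCP 4420, and connectivity is ping-verified in both directions before nvmf_tgt starts inside the namespace. Condensed from the trace:

    # Two NIC ports cabled back-to-back; isolate the target port in a netns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator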
00:13:28.796 [2024-11-25 14:11:33.008299] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.796 [2024-11-25 14:11:33.088085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.796 [2024-11-25 14:11:33.133275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.796 [2024-11-25 14:11:33.133328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.796 [2024-11-25 14:11:33.133334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.796 [2024-11-25 14:11:33.133340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.796 [2024-11-25 14:11:33.133344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.796 [2024-11-25 14:11:33.134013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 [2024-11-25 14:11:33.289072] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 Malloc0 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.796 14:11:33 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 [2024-11-25 14:11:33.349257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3266561 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3266561 /var/tmp/bdevperf.sock 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3266561 ']' 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:28.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.796 14:11:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:28.796 [2024-11-25 14:11:33.410598] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
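[annotation] With the fabric up, the queue_depth test body traced above is a short RPC sequence against the in-namespace target followed by an idle bdevperf client on its own RPC socket: create the TCP transport, back subsystem cnode1 with a 64 MiB / 512 B-block Malloc bdev, expose it on 10.0.0.2:4420, then launch bdevperf paused (-z) at queue depth 1024 with 4 KiB verify I/O for 10 seconds. Equivalent invocations, condensed from the trace with paths shortened; the controller attach and perform_tests calls that actually start the run appear in the next block:

    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target on core mask 0x2

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Client side: idle bdevperf, controlled over a second RPC socket.
    bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &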
00:13:28.796 [2024-11-25 14:11:33.410690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266561 ] 00:13:28.796 [2024-11-25 14:11:33.507608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.796 [2024-11-25 14:11:33.561964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.368 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.368 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:29.368 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:29.368 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.368 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:29.629 NVMe0n1 00:13:29.629 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.629 14:11:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:29.629 Running I/O for 10 seconds... 00:13:31.599 10565.00 IOPS, 41.27 MiB/s [2024-11-25T13:11:37.629Z] 10905.00 IOPS, 42.60 MiB/s [2024-11-25T13:11:39.010Z] 11260.67 IOPS, 43.99 MiB/s [2024-11-25T13:11:39.580Z] 11511.75 IOPS, 44.97 MiB/s [2024-11-25T13:11:40.964Z] 11874.00 IOPS, 46.38 MiB/s [2024-11-25T13:11:41.641Z] 12112.67 IOPS, 47.32 MiB/s [2024-11-25T13:11:42.582Z] 12294.71 IOPS, 48.03 MiB/s [2024-11-25T13:11:43.969Z] 12528.75 IOPS, 48.94 MiB/s [2024-11-25T13:11:44.911Z] 12632.56 IOPS, 49.35 MiB/s [2024-11-25T13:11:44.911Z] 12790.40 IOPS, 49.96 MiB/s 00:13:39.821 Latency(us) 00:13:39.821 [2024-11-25T13:11:44.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.821 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:39.821 Verification LBA range: start 0x0 length 0x4000 00:13:39.821 NVMe0n1 : 10.06 12816.16 50.06 0.00 0.00 79639.97 25231.36 65099.09 00:13:39.821 [2024-11-25T13:11:44.911Z] =================================================================================================================== 00:13:39.821 [2024-11-25T13:11:44.911Z] Total : 12816.16 50.06 0.00 0.00 79639.97 25231.36 65099.09 00:13:39.821 { 00:13:39.821 "results": [ 00:13:39.821 { 00:13:39.821 "job": "NVMe0n1", 00:13:39.821 "core_mask": "0x1", 00:13:39.821 "workload": "verify", 00:13:39.821 "status": "finished", 00:13:39.821 "verify_range": { 00:13:39.821 "start": 0, 00:13:39.821 "length": 16384 00:13:39.821 }, 00:13:39.821 "queue_depth": 1024, 00:13:39.821 "io_size": 4096, 00:13:39.821 "runtime": 10.059799, 00:13:39.821 "iops": 12816.160640982986, 00:13:39.821 "mibps": 50.06312750383979, 00:13:39.822 "io_failed": 0, 00:13:39.822 "io_timeout": 0, 00:13:39.822 "avg_latency_us": 79639.96915694549, 00:13:39.822 "min_latency_us": 25231.36, 00:13:39.822 "max_latency_us": 65099.09333333333 00:13:39.822 } 00:13:39.822 ], 00:13:39.822 "core_count": 1 00:13:39.822 } 00:13:39.822 14:11:44 
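[annotation] The MiB/s column in the bdevperf table above follows directly from the reported IOPS and the fixed 4096-byte I/O size; as a quick sanity check against the JSON result:

    $ echo 'scale=2; 12816.160640982986 * 4096 / 1048576' | bc
    50.06

which matches the 50.06 MiB/s in the summary row.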
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3266561 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3266561 ']' 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3266561 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266561 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266561' 00:13:39.822 killing process with pid 3266561 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3266561 00:13:39.822 Received shutdown signal, test time was about 10.000000 seconds 00:13:39.822 00:13:39.822 Latency(us) 00:13:39.822 [2024-11-25T13:11:44.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.822 [2024-11-25T13:11:44.912Z] =================================================================================================================== 00:13:39.822 [2024-11-25T13:11:44.912Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3266561 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:39.822 rmmod nvme_tcp 00:13:39.822 rmmod nvme_fabrics 00:13:39.822 rmmod nvme_keyring 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3266536 ']' 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3266536 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3266536 ']' 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 3266536 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.822 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266536 00:13:40.084 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:40.084 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:40.084 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266536' 00:13:40.084 killing process with pid 3266536 00:13:40.084 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3266536 00:13:40.084 14:11:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3266536 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.084 14:11:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:42.628 00:13:42.628 real 0m21.971s 00:13:42.628 user 0m25.118s 00:13:42.628 sys 0m7.130s 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:42.628 ************************************ 00:13:42.628 END TEST nvmf_queue_depth 00:13:42.628 ************************************ 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core -- 
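[annotation] For reference, the queue_depth teardown that completed just above (before this multipath test begins repeating the same bring-up) unwinds the setup in reverse: unload the NVMe/TCP initiator modules, kill the target, strip only the comment-tagged SPDK iptables rules by round-tripping the ruleset through grep, remove the namespace (not visible here because _remove_spdk_ns runs with xtrace disabled), and flush the initiator address. A condensed sketch, with the unseen namespace removal marked as an assumption:

    modprobe -v -r nvme-tcp        # trace shows rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3266536                                    # nvmf_tgt pid from this run
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk                 # assumed: done inside _remove_spdk_ns (not traced)
    ip -4 addr flush cvl_0_1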
common/autotest_common.sh@10 -- # set +x 00:13:42.628 ************************************ 00:13:42.628 START TEST nvmf_target_multipath 00:13:42.628 ************************************ 00:13:42.628 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:42.628 * Looking for test storage... 00:13:42.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.629 --rc genhtml_branch_coverage=1 00:13:42.629 --rc genhtml_function_coverage=1 00:13:42.629 --rc genhtml_legend=1 00:13:42.629 --rc geninfo_all_blocks=1 00:13:42.629 --rc geninfo_unexecuted_blocks=1 00:13:42.629 00:13:42.629 ' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.629 --rc genhtml_branch_coverage=1 00:13:42.629 --rc genhtml_function_coverage=1 00:13:42.629 --rc genhtml_legend=1 00:13:42.629 --rc geninfo_all_blocks=1 00:13:42.629 --rc geninfo_unexecuted_blocks=1 00:13:42.629 00:13:42.629 ' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.629 --rc genhtml_branch_coverage=1 00:13:42.629 --rc genhtml_function_coverage=1 00:13:42.629 --rc genhtml_legend=1 00:13:42.629 --rc geninfo_all_blocks=1 00:13:42.629 --rc geninfo_unexecuted_blocks=1 00:13:42.629 00:13:42.629 ' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.629 --rc genhtml_branch_coverage=1 00:13:42.629 --rc genhtml_function_coverage=1 00:13:42.629 --rc genhtml_legend=1 00:13:42.629 --rc geninfo_all_blocks=1 00:13:42.629 --rc geninfo_unexecuted_blocks=1 00:13:42.629 00:13:42.629 ' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.629 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.630 14:11:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:13:50.767 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:50.768 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:50.768 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:50.768 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.768 14:11:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:50.768 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.768 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.769 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:50.769 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:50.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:13:50.769 00:13:50.769 --- 10.0.0.2 ping statistics --- 00:13:50.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.769 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:13:50.769 14:11:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:13:50.769 00:13:50.769 --- 10.0.0.1 ping statistics --- 00:13:50.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.769 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:50.769 only one NIC for nvmf test 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
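For context, the nvmf_tcp_init sequence traced above carves the two E810 ports into a self-contained test topology: one port (cvl_0_0) is moved into a network namespace to play the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator, so a single machine exercises real NIC-to-NIC NVMe/TCP traffic. A condensed sketch of those steps, using the interface names and 10.0.0.0/24 addressing from this run:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

The two one-packet pings recorded above are exactly this smoke test: both directions answer in under a millisecond, confirming the namespace plumbing before any NVMe traffic starts.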
00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:50.769 rmmod nvme_tcp 00:13:50.769 rmmod nvme_fabrics 00:13:50.769 rmmod nvme_keyring 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.769 14:11:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.156 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.418 00:13:52.418 real 0m10.014s 00:13:52.418 user 0m2.197s 00:13:52.418 sys 0m5.775s 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:52.418 ************************************ 00:13:52.418 END TEST nvmf_target_multipath 00:13:52.418 ************************************ 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:52.418 ************************************ 00:13:52.418 START TEST nvmf_zcopy 00:13:52.418 ************************************ 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:52.418 * Looking for test storage... 
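One genuine script bug is recorded in this section: each time test/nvmf/common.sh is sourced, its line 33 evaluates '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because the [ builtin refuses an empty string as an operand of the numeric -eq operator. The test merely evaluates false and the run continues, which is why it is easy to miss, but the comparison can never do what was intended while the variable is unset. It fired once near the start of the multipath run above and fires again below when zcopy.sh sources the same file. The usual fix is to give the expansion a numeric default; a minimal sketch (FLAG and enable_feature are placeholders, since the trace does not show which variable common.sh line 33 actually tests):

    # Breaks when FLAG is unset or empty -- [ sees a non-integer operand:
    #   [ "$FLAG" -eq 1 ] && enable_feature
    # Defaulting the expansion keeps the operand an integer in every case:
    [ "${FLAG:-0}" -eq 1 ] && enable_feature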
00:13:52.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:13:52.418 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:52.679 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.680 --rc genhtml_branch_coverage=1 00:13:52.680 --rc genhtml_function_coverage=1 00:13:52.680 --rc genhtml_legend=1 00:13:52.680 --rc geninfo_all_blocks=1 00:13:52.680 --rc geninfo_unexecuted_blocks=1 00:13:52.680 00:13:52.680 ' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.680 --rc genhtml_branch_coverage=1 00:13:52.680 --rc genhtml_function_coverage=1 00:13:52.680 --rc genhtml_legend=1 00:13:52.680 --rc geninfo_all_blocks=1 00:13:52.680 --rc geninfo_unexecuted_blocks=1 00:13:52.680 00:13:52.680 ' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.680 --rc genhtml_branch_coverage=1 00:13:52.680 --rc genhtml_function_coverage=1 00:13:52.680 --rc genhtml_legend=1 00:13:52.680 --rc geninfo_all_blocks=1 00:13:52.680 --rc geninfo_unexecuted_blocks=1 00:13:52.680 00:13:52.680 ' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:52.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.680 --rc genhtml_branch_coverage=1 00:13:52.680 --rc genhtml_function_coverage=1 00:13:52.680 --rc genhtml_legend=1 00:13:52.680 --rc geninfo_all_blocks=1 00:13:52.680 --rc geninfo_unexecuted_blocks=1 00:13:52.680 00:13:52.680 ' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.680 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.681 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.681 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.681 14:11:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:00.826 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:00.826 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.826 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:00.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:00.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.827 14:12:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.742 ms 00:14:00.827 00:14:00.827 --- 10.0.0.2 ping statistics --- 00:14:00.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.827 rtt min/avg/max/mdev = 0.742/0.742/0.742/0.000 ms 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:14:00.827 00:14:00.827 --- 10.0.0.1 ping statistics --- 00:14:00.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.827 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3277364 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3277364 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3277364 ']' 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.827 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:00.827 [2024-11-25 14:12:05.166889] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
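nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), records its pid (3277364 here), and parks in waitforlisten until the app's RPC socket at /var/tmp/spdk.sock comes up. A rough sketch of that launch-and-wait shape; the real waitforlisten does more than this, so the socket-existence probe below is a simplification, but the retry budget of 100 matches the max_retries seen in the trace:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        [ -S /var/tmp/spdk.sock ] && break         # RPC socket exists; good enough for this sketch
        sleep 0.1
    done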
00:14:00.827 [2024-11-25 14:12:05.166954] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.827 [2024-11-25 14:12:05.268335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.827 [2024-11-25 14:12:05.319385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.827 [2024-11-25 14:12:05.319435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.827 [2024-11-25 14:12:05.319444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.827 [2024-11-25 14:12:05.319452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.827 [2024-11-25 14:12:05.319458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.827 [2024-11-25 14:12:05.320211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.089 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.089 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:14:01.089 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.089 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:01.089 14:12:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 [2024-11-25 14:12:06.049974] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 [2024-11-25 14:12:06.074303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 malloc0 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:01.089 { 00:14:01.089 "params": { 00:14:01.089 "name": "Nvme$subsystem", 00:14:01.089 "trtype": "$TEST_TRANSPORT", 00:14:01.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:01.089 "adrfam": "ipv4", 00:14:01.089 "trsvcid": "$NVMF_PORT", 00:14:01.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:01.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:01.089 "hdgst": ${hdgst:-false}, 00:14:01.089 "ddgst": ${ddgst:-false} 00:14:01.089 }, 00:14:01.089 "method": "bdev_nvme_attach_controller" 00:14:01.089 } 00:14:01.089 EOF 00:14:01.089 )") 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
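The rpc_cmd calls in this stretch are the target bring-up distilled: create the zero-copy TCP transport, create the subsystem, add its listener (plus a discovery listener), back it with a malloc bdev, and attach that bdev as namespace 1. bdevperf is then aimed at the subsystem with a JSON config streamed over a file descriptor; the --json /dev/fd/62 above is what a bash process substitution looks like once expanded. A condensed sketch with rpc.py called directly; gen_conf stands in for the gen_nvmf_target_json helper whose resolved JSON is printed just below:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy    # flags verbatim from the trace; --zcopy is the point of this suite
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0           # 32 MiB RAM-backed bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # bdevperf then connects as an initiator and drives verify I/O for 10 s:
    bdevperf --json <(gen_conf) -t 10 -q 128 -w verify -o 8192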
00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:01.089 14:12:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:01.089 "params": { 00:14:01.089 "name": "Nvme1", 00:14:01.089 "trtype": "tcp", 00:14:01.089 "traddr": "10.0.0.2", 00:14:01.089 "adrfam": "ipv4", 00:14:01.089 "trsvcid": "4420", 00:14:01.089 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.089 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:01.089 "hdgst": false, 00:14:01.089 "ddgst": false 00:14:01.089 }, 00:14:01.089 "method": "bdev_nvme_attach_controller" 00:14:01.089 }' 00:14:01.089 [2024-11-25 14:12:06.174629] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:14:01.089 [2024-11-25 14:12:06.174699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3277715 ] 00:14:01.350 [2024-11-25 14:12:06.266620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.350 [2024-11-25 14:12:06.320224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.611 Running I/O for 10 seconds... 00:14:03.574 6453.00 IOPS, 50.41 MiB/s [2024-11-25T13:12:10.047Z] 6746.50 IOPS, 52.71 MiB/s [2024-11-25T13:12:10.988Z] 7740.00 IOPS, 60.47 MiB/s [2024-11-25T13:12:11.927Z] 8235.50 IOPS, 64.34 MiB/s [2024-11-25T13:12:12.868Z] 8542.60 IOPS, 66.74 MiB/s [2024-11-25T13:12:13.808Z] 8743.33 IOPS, 68.31 MiB/s [2024-11-25T13:12:14.749Z] 8886.14 IOPS, 69.42 MiB/s [2024-11-25T13:12:15.692Z] 8993.12 IOPS, 70.26 MiB/s [2024-11-25T13:12:17.076Z] 9076.78 IOPS, 70.91 MiB/s [2024-11-25T13:12:17.076Z] 9142.00 IOPS, 71.42 MiB/s 00:14:11.986 Latency(us) 00:14:11.986 [2024-11-25T13:12:17.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:11.986 Verification LBA range: start 0x0 length 0x1000 00:14:11.986 Nvme1n1 : 10.01 9145.87 71.45 0.00 0.00 13949.11 2484.91 28835.84 00:14:11.986 [2024-11-25T13:12:17.076Z] =================================================================================================================== 00:14:11.986 [2024-11-25T13:12:17.076Z] Total : 9145.87 71.45 0.00 0.00 13949.11 2484.91 28835.84 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3280199 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:11.986 { 00:14:11.986 "params": { 00:14:11.986 "name": 
"Nvme$subsystem", 00:14:11.986 "trtype": "$TEST_TRANSPORT", 00:14:11.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:11.986 "adrfam": "ipv4", 00:14:11.986 "trsvcid": "$NVMF_PORT", 00:14:11.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:11.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:11.986 "hdgst": ${hdgst:-false}, 00:14:11.986 "ddgst": ${ddgst:-false} 00:14:11.986 }, 00:14:11.986 "method": "bdev_nvme_attach_controller" 00:14:11.986 } 00:14:11.986 EOF 00:14:11.986 )") 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:14:11.986 [2024-11-25 14:12:16.795777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.986 [2024-11-25 14:12:16.795808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:14:11.986 14:12:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:11.986 "params": { 00:14:11.986 "name": "Nvme1", 00:14:11.986 "trtype": "tcp", 00:14:11.986 "traddr": "10.0.0.2", 00:14:11.986 "adrfam": "ipv4", 00:14:11.986 "trsvcid": "4420", 00:14:11.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:11.986 "hdgst": false, 00:14:11.986 "ddgst": false 00:14:11.986 }, 00:14:11.986 "method": "bdev_nvme_attach_controller" 00:14:11.986 }' 00:14:11.986 [2024-11-25 14:12:16.807769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.986 [2024-11-25 14:12:16.807779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.986 [2024-11-25 14:12:16.819797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.986 [2024-11-25 14:12:16.819805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.986 [2024-11-25 14:12:16.831826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.986 [2024-11-25 14:12:16.831834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.986 [2024-11-25 14:12:16.843855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:11.986 [2024-11-25 14:12:16.843863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:11.986 [2024-11-25 14:12:16.847096] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:14:11.986  [2024-11-25 14:12:16.847157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3280199 ]
00:14:11.986  [2024-11-25 14:12:16.855887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-11-25 14:12:16.855895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same two *ERROR* lines repeat at ~12-15 ms intervals, 14:12:16.867915 through 14:12:16.928076 ...]
[2024-11-25 14:12:16.930084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... *ERROR* pair repeats, 14:12:16.940098 through 14:12:16.952136 ...]
[2024-11-25 14:12:16.959710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... *ERROR* pair repeats, 14:12:16.964160 through 14:12:17.252939 ...]
00:14:12.247  Running I/O for 5 seconds...
[... *ERROR* pair repeats, 14:12:17.264949 through 14:12:18.262618 ...]
19067.00 IOPS,   148.96 MiB/s [2024-11-25T13:12:18.384Z]
[... *ERROR* pair repeats, 14:12:18.275543 through 14:12:19.269689 ...]
19181.50 IOPS,   149.86 MiB/s [2024-11-25T13:12:19.438Z]
[... *ERROR* pair repeats, 14:12:19.283153 through 14:12:20.263020 ...]
19208.33 IOPS,   150.07 MiB/s [2024-11-25T13:12:20.482Z]
[... *ERROR* pair repeats, 14:12:20.276326 through 14:12:20.380212 ...]
[2024-11-25 14:12:20.393479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:15.392 [2024-11-25 14:12:20.393495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.392 [2024-11-25 14:12:20.405898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.392 [2024-11-25 14:12:20.405912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.392 [2024-11-25 14:12:20.418324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.392 [2024-11-25 14:12:20.418339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.392 [2024-11-25 14:12:20.431223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.392 [2024-11-25 14:12:20.431238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.392 [2024-11-25 14:12:20.443980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.392 [2024-11-25 14:12:20.443994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.392 [2024-11-25 14:12:20.457243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.392 [2024-11-25 14:12:20.457258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.392 [2024-11-25 14:12:20.470611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.392 [2024-11-25 14:12:20.470625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.483331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.483346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.496530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.496544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.509772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.509786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.523293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.523308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.536213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.536228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.549283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.549298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.562658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.562673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.575864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.575879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.588756] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.588771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.601829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.601844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.615383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.615398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.628610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.628624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.642300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.653 [2024-11-25 14:12:20.642315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.653 [2024-11-25 14:12:20.655526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.654 [2024-11-25 14:12:20.655541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.654 [2024-11-25 14:12:20.668601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.654 [2024-11-25 14:12:20.668616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.654 [2024-11-25 14:12:20.682101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.654 [2024-11-25 14:12:20.682117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.654 [2024-11-25 14:12:20.694914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.654 [2024-11-25 14:12:20.694929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.654 [2024-11-25 14:12:20.707451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.654 [2024-11-25 14:12:20.707466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.654 [2024-11-25 14:12:20.720501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.654 [2024-11-25 14:12:20.720516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.654 [2024-11-25 14:12:20.733022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.654 [2024-11-25 14:12:20.733037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.915 [2024-11-25 14:12:20.745798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.915 [2024-11-25 14:12:20.745813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.915 [2024-11-25 14:12:20.759571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.915 [2024-11-25 14:12:20.759586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.915 [2024-11-25 14:12:20.773189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.915 [2024-11-25 14:12:20.773204] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.915 [2024-11-25 14:12:20.786347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.915 [2024-11-25 14:12:20.786362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.915 [2024-11-25 14:12:20.799137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.799151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.812305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.812320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.825766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.825781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.838982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.838997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.851800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.851816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.865202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.865217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.878846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.878861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.892193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.892208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.905631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.905647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.918862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.918877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.932087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.932103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.945627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.945643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.959376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.959392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.973046] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.973061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.985933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.985948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:15.916 [2024-11-25 14:12:20.999183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:15.916 [2024-11-25 14:12:20.999198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.012550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.012566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.025014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.025029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.038424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.038440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.051030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.051046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.064066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.064081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.076563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.076578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.089455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.089470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.102395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.102410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.177 [2024-11-25 14:12:21.115484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.177 [2024-11-25 14:12:21.115499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.128808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.128824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.141409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.141425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.154993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.155009] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.168688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.168703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.181573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.181588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.195218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.195234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.208732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.208747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.222384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.222399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.235156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.235176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.248200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.248215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.178 [2024-11-25 14:12:21.261226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.178 [2024-11-25 14:12:21.261241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.273958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.273974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 19228.25 IOPS, 150.22 MiB/s [2024-11-25T13:12:21.528Z] [2024-11-25 14:12:21.286535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.286550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.300054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.300069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.312922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.312937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.325584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.325600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.338751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.338767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 
14:12:21.352185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.352201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.365507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.365522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.378740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.378762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.392190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.392205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.405918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.405933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.438 [2024-11-25 14:12:21.418456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.438 [2024-11-25 14:12:21.418471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.431305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.431320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.444655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.444670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.457735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.457750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.471382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.471397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.484480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.484495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.498040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.498055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.511627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.511641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.439 [2024-11-25 14:12:21.525102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.439 [2024-11-25 14:12:21.525117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.538376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.538391] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.551966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.551981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.565301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.565316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.578765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.578779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.592058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.592073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.605670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.605685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.618811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.618826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.631712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.631731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.645300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.645315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.658963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.658978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.672507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.672521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.685987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.686002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.699377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.699392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.712750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.712765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.726057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.726072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.738591] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.738606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.752170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.752184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.700 [2024-11-25 14:12:21.765609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.700 [2024-11-25 14:12:21.765625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.701 [2024-11-25 14:12:21.778577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.701 [2024-11-25 14:12:21.778592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.791827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.791842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.804768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.804783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.817381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.817396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.831520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.831534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.844003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.844018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.856705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.856719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.869983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.869998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.883315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.883333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.896998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.897013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.909955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.909970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.923028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.923043] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.936410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.936424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.950183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.950198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.962682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.962696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.975208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.975223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:21.987900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:21.987914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:22.001207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:22.001222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:22.014612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:22.014627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:22.027359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:22.027374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:16.962 [2024-11-25 14:12:22.041045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:16.962 [2024-11-25 14:12:22.041059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.054322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.054338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.067687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.067701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.080256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.080270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.093521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.093536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.106760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.106775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.120125] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.120140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.133016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.133031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.146387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.146402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.159693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.159708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.172831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.223 [2024-11-25 14:12:22.172846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.223 [2024-11-25 14:12:22.185595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.185610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.199011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.199026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.212153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.212171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.224623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.224639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.238261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.238275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.251349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.251364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.264753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.264768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.277484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.277498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 19240.40 IOPS, 150.32 MiB/s 00:14:17.224 Latency(us) 00:14:17.224 [2024-11-25T13:12:22.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.224 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:17.224 Nvme1n1 : 5.01 19243.61 150.34 0.00 0.00 6646.41 2867.20 13271.04 00:14:17.224 [2024-11-25T13:12:22.314Z] 
=================================================================================================================== 00:14:17.224 [2024-11-25T13:12:22.314Z] Total : 19243.61 150.34 0.00 0.00 6646.41 2867.20 13271.04 00:14:17.224 [2024-11-25 14:12:22.287259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.287273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.299290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.299303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.224 [2024-11-25 14:12:22.311324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.224 [2024-11-25 14:12:22.311337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.484 [2024-11-25 14:12:22.323354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.484 [2024-11-25 14:12:22.323366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.484 [2024-11-25 14:12:22.335383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.484 [2024-11-25 14:12:22.335394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.484 [2024-11-25 14:12:22.347411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.484 [2024-11-25 14:12:22.347422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.484 [2024-11-25 14:12:22.359441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.484 [2024-11-25 14:12:22.359449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.484 [2024-11-25 14:12:22.371473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.485 [2024-11-25 14:12:22.371482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.485 [2024-11-25 14:12:22.383502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:17.485 [2024-11-25 14:12:22.383510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:17.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3280199) - No such process 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3280199 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.485 14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:17.485 delay0 00:14:17.485 14:12:22 
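For context, the xtrace above is zcopy.sh swapping the namespace's backing bdev: NSID 1 is removed, a delay bdev is layered on top of malloc0, and the next step re-adds it under the same NSID. A by-hand sketch of the same sequence with SPDK's rpc.py, assuming a running target that already has the malloc0 bdev and the cnode1 subsystem (the rpc.py path is abbreviated):

    #!/usr/bin/env bash
    # Sketch only: replays the rpc_cmd calls traced above against a live target.
    rpc=./scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_subsystem_remove_ns "$nqn" 1            # detach NSID 1
    # -r/-t = average/p99 read latency (us), -w/-n = average/p99 write latency (us)
    $rpc bdev_delay_create -b malloc0 -d delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000  # prints "delay0" on success
    $rpc nvmf_subsystem_add_ns "$nqn" delay0 -n 1     # re-attach as NSID 1

The wall of "Requested NSID 1 already in use" errors earlier in the run is this same add path failing by design: the test keeps requesting NSID 1 while it is still attached, and nvmf_rpc_ns_paused reports each rejection.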
14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:12:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-25 14:12:22.508764] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:14:25.649 Initializing NVMe Controllers
00:14:25.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:25.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:25.649 Initialization complete. Launching workers.
00:14:25.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 24878
00:14:25.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 25042, failed to submit 100
00:14:25.649 success 24957, unsuccessful 85, failed 0
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:25.649 rmmod nvme_tcp
00:14:25.649 rmmod nvme_fabrics
00:14:25.649 rmmod nvme_keyring
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3277364 ']'
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3277364
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3277364 ']'
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3277364
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3277364
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3277364'
00:14:25.649 killing process with pid 3277364
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3277364
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3277364
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:25.649 14:12:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:27.031 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:27.031
00:14:27.031 real 0m34.671s
00:14:27.031 user 0m45.827s
00:14:27.031 sys 0m11.866s
00:14:27.031 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:27.031 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:14:27.031 ************************************
00:14:27.031 END TEST nvmf_zcopy
00:14:27.031 ************************************
00:14:27.031 14:12:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:14:27.031 14:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:27.031 14:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:27.031 14:12:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:14:27.031 ************************************
00:14:27.031 START TEST nvmf_nmic
00:14:27.031 ************************************
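The run_test lines here are autotest_common.sh's harness switching from one test to the next; it is what produces the START/END banners and the real/user/sys accounting shown above. A minimal sketch of that pattern (not the SPDK implementation, only the behavior visible in this log):

    # Sketch: banner + timed run + banner, as seen in this log.
    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # emits the real/user/sys block
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }
    # usage: run_test_sketch nvmf_nmic ./test/nvmf/target/nmic.sh --transport=tcp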
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:14:27.294 * Looking for test storage...
00:14:27.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:27.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.294 --rc genhtml_branch_coverage=1
00:14:27.294 --rc genhtml_function_coverage=1
00:14:27.294 --rc genhtml_legend=1
00:14:27.294 --rc geninfo_all_blocks=1
00:14:27.294 --rc geninfo_unexecuted_blocks=1
00:14:27.294
00:14:27.294 '
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:27.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.294 --rc genhtml_branch_coverage=1
00:14:27.294 --rc genhtml_function_coverage=1
00:14:27.294 --rc genhtml_legend=1
00:14:27.294 --rc geninfo_all_blocks=1
00:14:27.294 --rc geninfo_unexecuted_blocks=1
00:14:27.294
00:14:27.294 '
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:14:27.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.294 --rc genhtml_branch_coverage=1
00:14:27.294 --rc genhtml_function_coverage=1
00:14:27.294 --rc genhtml_legend=1
00:14:27.294 --rc geninfo_all_blocks=1
00:14:27.294 --rc geninfo_unexecuted_blocks=1
00:14:27.294
00:14:27.294 '
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:14:27.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.294 --rc genhtml_branch_coverage=1
00:14:27.294 --rc genhtml_function_coverage=1
00:14:27.294 --rc genhtml_legend=1
00:14:27.294 --rc geninfo_all_blocks=1
00:14:27.294 --rc geninfo_unexecuted_blocks=1
00:14:27.294
00:14:27.294 '
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
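The lt/cmp_versions trace above is scripts/common.sh checking that the installed lcov (1.15) predates version 2 before choosing coverage flags: both version strings are split on '.', '-' and ':' and compared field by field. A condensed sketch of that comparison, re-derived from the trace (SPDK's scripts/common.sh is the reference implementation):

    # Sketch: field-wise "less than" version compare, as traced above.
    lt_sketch() {  # lt_sketch 1.15 2 -> returns 0 (true): 1 < 2 in the first field
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater -> not less
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # smaller -> less
        done
        return 1  # equal -> not less
    }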
00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.294 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:27.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:27.295 
14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:14:27.295 14:12:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:35.440 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:35.440 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:35.440 14:12:39 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:35.440 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:35.440 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:35.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:14:35.440 00:14:35.440 --- 10.0.0.2 ping statistics --- 00:14:35.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.440 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:14:35.440 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:14:35.440 00:14:35.440 --- 10.0.0.1 ping statistics --- 00:14:35.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.441 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3286893 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3286893 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3286893 ']' 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.441 14:12:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.441 [2024-11-25 14:12:39.863508] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:14:35.441 [2024-11-25 14:12:39.863574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.441 [2024-11-25 14:12:39.962634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.441 [2024-11-25 14:12:40.019858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.441 [2024-11-25 14:12:40.019915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.441 [2024-11-25 14:12:40.019924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.441 [2024-11-25 14:12:40.019932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.441 [2024-11-25 14:12:40.019939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.441 [2024-11-25 14:12:40.021870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.441 [2024-11-25 14:12:40.022030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.441 [2024-11-25 14:12:40.022237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.441 [2024-11-25 14:12:40.022238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.703 [2024-11-25 14:12:40.745536] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.703 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.964 Malloc0 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.964 [2024-11-25 14:12:40.827524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:35.964 test case1: single bdev can't be used in multiple subsystems 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.964 [2024-11-25 14:12:40.863405] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:35.964 [2024-11-25 14:12:40.863434] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:35.964 [2024-11-25 14:12:40.863443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.964 request: 00:14:35.964 { 00:14:35.964 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:35.964 "namespace": { 00:14:35.964 "bdev_name": "Malloc0", 00:14:35.964 "no_auto_visible": false 
00:14:35.964 }, 00:14:35.964 "method": "nvmf_subsystem_add_ns", 00:14:35.964 "req_id": 1 00:14:35.964 } 00:14:35.964 Got JSON-RPC error response 00:14:35.964 response: 00:14:35.964 { 00:14:35.964 "code": -32602, 00:14:35.964 "message": "Invalid parameters" 00:14:35.964 } 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:35.964 Adding namespace failed - expected result. 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:35.964 test case2: host connect to nvmf target in multiple paths 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:35.964 [2024-11-25 14:12:40.875586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.964 14:12:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:37.370 14:12:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:39.284 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:39.284 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:14:39.284 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:39.284 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:39.284 14:12:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:14:41.198 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:41.198 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:41.198 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.198 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:41.198 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.198 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:41.198 14:12:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:41.198 [global] 00:14:41.198 thread=1 00:14:41.198 invalidate=1 00:14:41.198 rw=write 00:14:41.198 time_based=1 00:14:41.198 runtime=1 00:14:41.198 ioengine=libaio 00:14:41.198 direct=1 00:14:41.198 bs=4096 00:14:41.198 iodepth=1 00:14:41.198 norandommap=0 00:14:41.198 numjobs=1 00:14:41.198 00:14:41.198 verify_dump=1 00:14:41.198 verify_backlog=512 00:14:41.198 verify_state_save=0 00:14:41.198 do_verify=1 00:14:41.198 verify=crc32c-intel 00:14:41.198 [job0] 00:14:41.198 filename=/dev/nvme0n1 00:14:41.198 Could not set queue depth (nvme0n1) 00:14:41.198 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:41.198 fio-3.35 00:14:41.198 Starting 1 thread 00:14:42.583 00:14:42.583 job0: (groupid=0, jobs=1): err= 0: pid=3288436: Mon Nov 25 14:12:47 2024 00:14:42.583 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:42.583 slat (nsec): min=25181, max=56107, avg=25798.27, stdev=1977.18 00:14:42.583 clat (usec): min=652, max=1191, avg=963.35, stdev=70.85 00:14:42.583 lat (usec): min=678, max=1217, avg=989.15, stdev=70.69 00:14:42.583 clat percentiles (usec): 00:14:42.583 | 1.00th=[ 783], 5.00th=[ 824], 10.00th=[ 857], 20.00th=[ 914], 00:14:42.583 | 30.00th=[ 947], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 988], 00:14:42.583 | 70.00th=[ 1004], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:14:42.583 | 99.00th=[ 1123], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188], 00:14:42.583 | 99.99th=[ 1188] 00:14:42.583 write: IOPS=749, BW=2997KiB/s (3069kB/s)(3000KiB/1001msec); 0 zone resets 00:14:42.583 slat (usec): min=9, max=26798, avg=64.72, stdev=977.54 00:14:42.583 clat (usec): min=276, max=799, avg=578.12, stdev=90.92 00:14:42.583 lat (usec): min=286, max=27416, avg=642.84, stdev=983.59 00:14:42.583 clat percentiles (usec): 00:14:42.583 | 1.00th=[ 359], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 498], 00:14:42.583 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:14:42.583 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 685], 95.00th=[ 701], 00:14:42.583 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 799], 99.95th=[ 799], 00:14:42.583 | 99.99th=[ 799] 00:14:42.583 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:42.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:42.583 lat (usec) : 500=12.76%, 750=46.28%, 1000=29.08% 00:14:42.583 lat (msec) : 2=11.89% 00:14:42.583 cpu : usr=1.40%, sys=4.00%, ctx=1265, majf=0, minf=1 00:14:42.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:42.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:42.583 issued rwts: total=512,750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:42.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:42.583 00:14:42.583 Run status group 0 (all jobs): 00:14:42.583 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:14:42.583 WRITE: bw=2997KiB/s (3069kB/s), 2997KiB/s-2997KiB/s (3069kB/s-3069kB/s), io=3000KiB (3072kB), run=1001-1001msec 00:14:42.583 00:14:42.583 Disk stats (read/write): 00:14:42.583 nvme0n1: ios=538/583, merge=0/0, ticks=1494/321, in_queue=1815, util=98.60% 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:42.583 rmmod nvme_tcp 00:14:42.583 rmmod nvme_fabrics 00:14:42.583 rmmod nvme_keyring 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3286893 ']' 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3286893 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3286893 ']' 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3286893 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.583 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3286893 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3286893' 00:14:42.843 killing process with pid 3286893 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3286893 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@978 -- # wait 3286893 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.843 14:12:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.386 14:12:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:45.386 00:14:45.386 real 0m17.834s 00:14:45.386 user 0m45.447s 00:14:45.386 sys 0m6.622s 00:14:45.386 14:12:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.386 14:12:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:45.386 ************************************ 00:14:45.386 END TEST nvmf_nmic 00:14:45.386 ************************************ 00:14:45.386 14:12:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:45.386 14:12:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:45.386 14:12:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.386 14:12:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:45.386 ************************************ 00:14:45.386 START TEST nvmf_fio_target 00:14:45.386 ************************************ 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:45.386 * Looking for test storage... 
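
The iptr cleanup above works because every firewall rule the test adds is tagged with an SPDK_NVMF comment, so teardown can filter the saved ruleset instead of tracking rules one by one. A sketch of that pairing; the insert line mirrors what this log shows, while the namespace removal is an assumed equivalent of the _remove_spdk_ns helper, not its verbatim body:

    # setup: tag the accept rule so it can be found later (as logged above)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop exactly the tagged rules, leaving the rest of the ruleset intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # assumed equivalent of _remove_spdk_ns plus the address flush seen above
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
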
00:14:45.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.386 --rc genhtml_branch_coverage=1 00:14:45.386 --rc genhtml_function_coverage=1 00:14:45.386 --rc genhtml_legend=1 00:14:45.386 --rc geninfo_all_blocks=1 00:14:45.386 --rc geninfo_unexecuted_blocks=1 00:14:45.386 00:14:45.386 ' 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.386 --rc genhtml_branch_coverage=1 00:14:45.386 --rc genhtml_function_coverage=1 00:14:45.386 --rc genhtml_legend=1 00:14:45.386 --rc geninfo_all_blocks=1 00:14:45.386 --rc geninfo_unexecuted_blocks=1 00:14:45.386 00:14:45.386 ' 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.386 --rc genhtml_branch_coverage=1 00:14:45.386 --rc genhtml_function_coverage=1 00:14:45.386 --rc genhtml_legend=1 00:14:45.386 --rc geninfo_all_blocks=1 00:14:45.386 --rc geninfo_unexecuted_blocks=1 00:14:45.386 00:14:45.386 ' 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:45.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:45.386 --rc genhtml_branch_coverage=1 00:14:45.386 --rc genhtml_function_coverage=1 00:14:45.386 --rc genhtml_legend=1 00:14:45.386 --rc geninfo_all_blocks=1 00:14:45.386 --rc geninfo_unexecuted_blocks=1 00:14:45.386 00:14:45.386 ' 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.386 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:45.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.387 14:12:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:45.387 14:12:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.530 14:12:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:53.530 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:53.530 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.530 14:12:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:53.530 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:53.530 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:53.530 14:12:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:53.530 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:53.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:14:53.531 00:14:53.531 --- 10.0.0.2 ping statistics --- 00:14:53.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.531 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:14:53.531 00:14:53.531 --- 10.0.0.1 ping statistics --- 00:14:53.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.531 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3292833 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3292833 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3292833 ']' 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.531 14:12:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.531 [2024-11-25 14:12:57.632585] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:14:53.531 [2024-11-25 14:12:57.632641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.531 [2024-11-25 14:12:57.732724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.531 [2024-11-25 14:12:57.787110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.531 [2024-11-25 14:12:57.787180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.531 [2024-11-25 14:12:57.787189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.531 [2024-11-25 14:12:57.787198] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.531 [2024-11-25 14:12:57.787205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.531 [2024-11-25 14:12:57.789209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.531 [2024-11-25 14:12:57.789297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.531 [2024-11-25 14:12:57.789506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.531 [2024-11-25 14:12:57.789506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.531 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.531 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:53.531 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:53.531 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:53.531 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.531 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.531 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:53.793 [2024-11-25 14:12:58.638701] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.793 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:53.793 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:53.793 14:12:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.054 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:54.054 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.314 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:54.314 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.576 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:54.576 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:54.576 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:54.837 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:54.837 14:12:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.098 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:55.098 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:55.357 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:55.357 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:55.357 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:55.618 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:55.618 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:55.879 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:55.879 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:55.879 14:13:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.141 [2024-11-25 14:13:01.116400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.141 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:56.406 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:56.667 14:13:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:58.054 14:13:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:58.054 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:58.054 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:58.054 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:58.054 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:58.054 14:13:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:59.968 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:59.968 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:59.968 14:13:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.968 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:59.968 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.968 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:59.968 14:13:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:59.968 [global] 00:14:59.968 thread=1 00:14:59.968 invalidate=1 00:14:59.968 rw=write 00:14:59.968 time_based=1 00:14:59.968 runtime=1 00:14:59.968 ioengine=libaio 00:14:59.968 direct=1 00:14:59.968 bs=4096 00:14:59.968 iodepth=1 00:14:59.968 norandommap=0 00:14:59.968 numjobs=1 00:14:59.968 00:14:59.968 verify_dump=1 00:14:59.968 verify_backlog=512 00:14:59.968 verify_state_save=0 00:14:59.968 do_verify=1 00:14:59.968 verify=crc32c-intel 00:14:59.968 [job0] 00:14:59.968 filename=/dev/nvme0n1 00:14:59.968 [job1] 00:14:59.968 filename=/dev/nvme0n2 00:14:59.968 [job2] 00:14:59.968 filename=/dev/nvme0n3 00:14:59.968 [job3] 00:14:59.968 filename=/dev/nvme0n4 00:15:00.253 Could not set queue depth (nvme0n1) 00:15:00.253 Could not set queue depth (nvme0n2) 00:15:00.253 Could not set queue depth (nvme0n3) 00:15:00.253 Could not set queue depth (nvme0n4) 00:15:00.518 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:00.518 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:00.518 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:00.518 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:00.518 fio-3.35 00:15:00.518 Starting 4 threads 00:15:01.903 00:15:01.903 job0: (groupid=0, jobs=1): err= 0: pid=3294708: Mon Nov 25 14:13:06 2024 00:15:01.903 read: IOPS=19, BW=76.9KiB/s (78.8kB/s)(80.0KiB/1040msec) 00:15:01.903 slat (nsec): min=27090, max=28275, avg=27586.00, stdev=383.21 00:15:01.903 clat (usec): min=830, max=41430, avg=38980.10, stdev=8980.20 00:15:01.903 lat (usec): min=858, max=41458, avg=39007.69, stdev=8980.22 00:15:01.903 clat percentiles (usec): 00:15:01.903 | 1.00th=[ 832], 5.00th=[ 832], 10.00th=[40633], 
20.00th=[41157], 00:15:01.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:01.903 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:01.903 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:01.903 | 99.99th=[41681] 00:15:01.903 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:15:01.903 slat (usec): min=9, max=317, avg=31.96, stdev=16.94 00:15:01.903 clat (usec): min=180, max=916, avg=464.61, stdev=155.13 00:15:01.903 lat (usec): min=202, max=981, avg=496.57, stdev=160.62 00:15:01.903 clat percentiles (usec): 00:15:01.903 | 1.00th=[ 204], 5.00th=[ 239], 10.00th=[ 285], 20.00th=[ 322], 00:15:01.903 | 30.00th=[ 367], 40.00th=[ 416], 50.00th=[ 449], 60.00th=[ 482], 00:15:01.903 | 70.00th=[ 529], 80.00th=[ 586], 90.00th=[ 693], 95.00th=[ 742], 00:15:01.903 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 914], 99.95th=[ 914], 00:15:01.903 | 99.99th=[ 914] 00:15:01.903 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:01.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:01.903 lat (usec) : 250=5.45%, 500=56.02%, 750=30.83%, 1000=4.14% 00:15:01.903 lat (msec) : 50=3.57% 00:15:01.903 cpu : usr=0.77%, sys=2.21%, ctx=536, majf=0, minf=1 00:15:01.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.903 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.904 job1: (groupid=0, jobs=1): err= 0: pid=3294709: Mon Nov 25 14:13:06 2024 00:15:01.904 read: IOPS=18, BW=75.2KiB/s (77.0kB/s)(76.0KiB/1011msec) 00:15:01.904 slat (nsec): min=27528, max=28656, avg=27930.95, stdev=342.14 00:15:01.904 clat (usec): min=40882, max=41102, avg=40967.55, stdev=48.77 00:15:01.904 lat (usec): min=40910, max=41131, avg=40995.48, stdev=48.87 00:15:01.904 clat percentiles (usec): 00:15:01.904 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:01.904 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:01.904 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:01.904 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:01.904 | 99.99th=[41157] 00:15:01.904 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:15:01.904 slat (nsec): min=9104, max=57260, avg=26940.90, stdev=13063.64 00:15:01.904 clat (usec): min=144, max=1002, avg=415.09, stdev=153.60 00:15:01.904 lat (usec): min=154, max=1038, avg=442.03, stdev=157.73 00:15:01.904 clat percentiles (usec): 00:15:01.904 | 1.00th=[ 157], 5.00th=[ 219], 10.00th=[ 233], 20.00th=[ 289], 00:15:01.904 | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 396], 60.00th=[ 433], 00:15:01.904 | 70.00th=[ 478], 80.00th=[ 545], 90.00th=[ 619], 95.00th=[ 717], 00:15:01.904 | 99.00th=[ 832], 99.50th=[ 881], 99.90th=[ 1004], 99.95th=[ 1004], 00:15:01.904 | 99.99th=[ 1004] 00:15:01.904 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:01.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:01.904 lat (usec) : 250=13.75%, 500=57.82%, 750=21.85%, 1000=2.82% 00:15:01.904 lat (msec) : 2=0.19%, 50=3.58% 00:15:01.904 cpu : usr=0.69%, sys=1.88%, ctx=532, majf=0, minf=1 00:15:01.904 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.904 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.904 job2: (groupid=0, jobs=1): err= 0: pid=3294710: Mon Nov 25 14:13:06 2024 00:15:01.904 read: IOPS=20, BW=81.6KiB/s (83.6kB/s)(84.0KiB/1029msec) 00:15:01.904 slat (nsec): min=25160, max=30347, avg=26033.24, stdev=1164.07 00:15:01.904 clat (usec): min=564, max=41996, avg=33661.57, stdev=16357.20 00:15:01.904 lat (usec): min=591, max=42022, avg=33687.60, stdev=16356.59 00:15:01.904 clat percentiles (usec): 00:15:01.904 | 1.00th=[ 562], 5.00th=[ 758], 10.00th=[ 865], 20.00th=[41157], 00:15:01.904 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:01.904 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:15:01.904 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:01.904 | 99.99th=[42206] 00:15:01.904 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:15:01.904 slat (nsec): min=9933, max=70282, avg=30473.33, stdev=8704.67 00:15:01.904 clat (usec): min=250, max=959, avg=589.57, stdev=125.24 00:15:01.904 lat (usec): min=261, max=992, avg=620.05, stdev=127.40 00:15:01.904 clat percentiles (usec): 00:15:01.904 | 1.00th=[ 285], 5.00th=[ 371], 10.00th=[ 416], 20.00th=[ 486], 00:15:01.904 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:15:01.904 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 791], 00:15:01.904 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 963], 99.95th=[ 963], 00:15:01.904 | 99.99th=[ 963] 00:15:01.904 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:01.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:01.904 lat (usec) : 500=22.51%, 750=64.73%, 1000=9.57% 00:15:01.904 lat (msec) : 50=3.19% 00:15:01.904 cpu : usr=1.07%, sys=1.26%, ctx=533, majf=0, minf=2 00:15:01.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.904 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.904 job3: (groupid=0, jobs=1): err= 0: pid=3294711: Mon Nov 25 14:13:06 2024 00:15:01.904 read: IOPS=28, BW=114KiB/s (117kB/s)(116KiB/1015msec) 00:15:01.904 slat (nsec): min=8633, max=31522, avg=26756.93, stdev=3657.69 00:15:01.904 clat (usec): min=850, max=42085, avg=23405.44, stdev=20583.43 00:15:01.904 lat (usec): min=859, max=42112, avg=23432.20, stdev=20584.23 00:15:01.904 clat percentiles (usec): 00:15:01.904 | 1.00th=[ 848], 5.00th=[ 881], 10.00th=[ 906], 20.00th=[ 971], 00:15:01.904 | 30.00th=[ 1004], 40.00th=[ 1045], 50.00th=[41157], 60.00th=[41157], 00:15:01.904 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:01.904 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:01.904 | 99.99th=[42206] 00:15:01.904 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:15:01.904 slat (usec): min=7, max=1115, avg=35.04, stdev=60.96 00:15:01.904 clat (usec): min=152, max=816, 
avg=608.63, stdev=109.67 00:15:01.904 lat (usec): min=168, max=1774, avg=643.66, stdev=129.38 00:15:01.904 clat percentiles (usec): 00:15:01.904 | 1.00th=[ 338], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 515], 00:15:01.904 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:15:01.904 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 734], 95.00th=[ 758], 00:15:01.904 | 99.00th=[ 791], 99.50th=[ 816], 99.90th=[ 816], 99.95th=[ 816], 00:15:01.904 | 99.99th=[ 816] 00:15:01.904 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:01.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:01.904 lat (usec) : 250=0.18%, 500=16.08%, 750=72.46%, 1000=7.21% 00:15:01.904 lat (msec) : 2=1.11%, 50=2.96% 00:15:01.904 cpu : usr=0.99%, sys=1.28%, ctx=545, majf=0, minf=1 00:15:01.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:01.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.904 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:01.904 00:15:01.904 Run status group 0 (all jobs): 00:15:01.904 READ: bw=342KiB/s (351kB/s), 75.2KiB/s-114KiB/s (77.0kB/s-117kB/s), io=356KiB (365kB), run=1011-1040msec 00:15:01.904 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2026KiB/s (2016kB/s-2074kB/s), io=8192KiB (8389kB), run=1011-1040msec 00:15:01.904 00:15:01.904 Disk stats (read/write): 00:15:01.904 nvme0n1: ios=64/512, merge=0/0, ticks=680/187, in_queue=867, util=83.77% 00:15:01.904 nvme0n2: ios=63/512, merge=0/0, ticks=770/189, in_queue=959, util=87.64% 00:15:01.904 nvme0n3: ios=73/512, merge=0/0, ticks=602/289, in_queue=891, util=95.13% 00:15:01.904 nvme0n4: ios=78/512, merge=0/0, ticks=595/293, in_queue=888, util=97.22% 00:15:01.904 14:13:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:01.904 [global] 00:15:01.904 thread=1 00:15:01.904 invalidate=1 00:15:01.904 rw=randwrite 00:15:01.904 time_based=1 00:15:01.904 runtime=1 00:15:01.904 ioengine=libaio 00:15:01.904 direct=1 00:15:01.904 bs=4096 00:15:01.904 iodepth=1 00:15:01.904 norandommap=0 00:15:01.904 numjobs=1 00:15:01.904 00:15:01.904 verify_dump=1 00:15:01.904 verify_backlog=512 00:15:01.904 verify_state_save=0 00:15:01.904 do_verify=1 00:15:01.904 verify=crc32c-intel 00:15:01.904 [job0] 00:15:01.904 filename=/dev/nvme0n1 00:15:01.904 [job1] 00:15:01.904 filename=/dev/nvme0n2 00:15:01.904 [job2] 00:15:01.904 filename=/dev/nvme0n3 00:15:01.904 [job3] 00:15:01.904 filename=/dev/nvme0n4 00:15:01.904 Could not set queue depth (nvme0n1) 00:15:01.904 Could not set queue depth (nvme0n2) 00:15:01.904 Could not set queue depth (nvme0n3) 00:15:01.904 Could not set queue depth (nvme0n4) 00:15:02.195 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.195 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.195 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.195 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:02.195 fio-3.35 00:15:02.195 Starting 4 threads 00:15:03.583 
00:15:03.583 job0: (groupid=0, jobs=1): err= 0: pid=3295235: Mon Nov 25 14:13:08 2024 00:15:03.583 read: IOPS=16, BW=67.6KiB/s (69.2kB/s)(68.0KiB/1006msec) 00:15:03.583 slat (nsec): min=26173, max=26908, avg=26398.65, stdev=179.21 00:15:03.583 clat (usec): min=1133, max=42060, avg=39481.27, stdev=9885.45 00:15:03.583 lat (usec): min=1159, max=42087, avg=39507.66, stdev=9885.50 00:15:03.583 clat percentiles (usec): 00:15:03.583 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41157], 20.00th=[41681], 00:15:03.583 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:15:03.583 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:03.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:03.583 | 99.99th=[42206] 00:15:03.583 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:15:03.583 slat (nsec): min=9623, max=57571, avg=30113.62, stdev=9292.69 00:15:03.583 clat (usec): min=219, max=890, avg=611.08, stdev=109.57 00:15:03.583 lat (usec): min=229, max=922, avg=641.20, stdev=114.01 00:15:03.583 clat percentiles (usec): 00:15:03.583 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 519], 00:15:03.583 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 652], 00:15:03.583 | 70.00th=[ 685], 80.00th=[ 701], 90.00th=[ 742], 95.00th=[ 758], 00:15:03.583 | 99.00th=[ 832], 99.50th=[ 832], 99.90th=[ 889], 99.95th=[ 889], 00:15:03.583 | 99.99th=[ 889] 00:15:03.583 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:03.583 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:03.583 lat (usec) : 250=0.19%, 500=17.58%, 750=72.02%, 1000=6.99% 00:15:03.583 lat (msec) : 2=0.19%, 50=3.02% 00:15:03.583 cpu : usr=0.70%, sys=1.59%, ctx=532, majf=0, minf=1 00:15:03.583 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.583 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.583 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.583 job1: (groupid=0, jobs=1): err= 0: pid=3295236: Mon Nov 25 14:13:08 2024 00:15:03.583 read: IOPS=68, BW=273KiB/s (280kB/s)(280KiB/1024msec) 00:15:03.583 slat (nsec): min=7272, max=41644, avg=27048.31, stdev=4342.24 00:15:03.583 clat (usec): min=661, max=41983, avg=10212.24, stdev=17164.31 00:15:03.583 lat (usec): min=689, max=42011, avg=10239.29, stdev=17164.80 00:15:03.583 clat percentiles (usec): 00:15:03.583 | 1.00th=[ 660], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 881], 00:15:03.583 | 30.00th=[ 906], 40.00th=[ 955], 50.00th=[ 988], 60.00th=[ 1004], 00:15:03.583 | 70.00th=[ 1020], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:03.583 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:03.583 | 99.99th=[42206] 00:15:03.583 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:15:03.583 slat (nsec): min=9292, max=57326, avg=30623.43, stdev=9832.06 00:15:03.583 clat (usec): min=219, max=991, avg=555.26, stdev=125.46 00:15:03.583 lat (usec): min=253, max=1024, avg=585.89, stdev=127.76 00:15:03.583 clat percentiles (usec): 00:15:03.584 | 1.00th=[ 269], 5.00th=[ 351], 10.00th=[ 388], 20.00th=[ 457], 00:15:03.584 | 30.00th=[ 490], 40.00th=[ 515], 50.00th=[ 553], 60.00th=[ 594], 00:15:03.584 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 
750], 00:15:03.584 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 988], 99.95th=[ 988], 00:15:03.584 | 99.99th=[ 988] 00:15:03.584 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:03.584 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:03.584 lat (usec) : 250=0.34%, 500=28.69%, 750=54.47%, 1000=11.51% 00:15:03.584 lat (msec) : 2=2.23%, 50=2.75% 00:15:03.584 cpu : usr=0.98%, sys=2.44%, ctx=583, majf=0, minf=1 00:15:03.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.584 issued rwts: total=70,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.584 job2: (groupid=0, jobs=1): err= 0: pid=3295237: Mon Nov 25 14:13:08 2024 00:15:03.584 read: IOPS=17, BW=69.4KiB/s (71.1kB/s)(72.0KiB/1037msec) 00:15:03.584 slat (nsec): min=10197, max=26761, avg=25293.89, stdev=3774.24 00:15:03.584 clat (usec): min=1013, max=42076, avg=39516.23, stdev=9616.87 00:15:03.584 lat (usec): min=1023, max=42102, avg=39541.52, stdev=9620.63 00:15:03.584 clat percentiles (usec): 00:15:03.584 | 1.00th=[ 1012], 5.00th=[ 1012], 10.00th=[41157], 20.00th=[41157], 00:15:03.584 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:03.584 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:03.584 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:03.584 | 99.99th=[42206] 00:15:03.584 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:15:03.584 slat (nsec): min=10118, max=65436, avg=30604.08, stdev=9108.84 00:15:03.584 clat (usec): min=245, max=1004, avg=591.84, stdev=112.46 00:15:03.584 lat (usec): min=256, max=1038, avg=622.44, stdev=115.67 00:15:03.584 clat percentiles (usec): 00:15:03.584 | 1.00th=[ 338], 5.00th=[ 392], 10.00th=[ 445], 20.00th=[ 494], 00:15:03.584 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 627], 00:15:03.584 | 70.00th=[ 652], 80.00th=[ 693], 90.00th=[ 734], 95.00th=[ 758], 00:15:03.584 | 99.00th=[ 824], 99.50th=[ 914], 99.90th=[ 1004], 99.95th=[ 1004], 00:15:03.584 | 99.99th=[ 1004] 00:15:03.584 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:03.584 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:03.584 lat (usec) : 250=0.19%, 500=20.94%, 750=69.81%, 1000=5.47% 00:15:03.584 lat (msec) : 2=0.38%, 50=3.21% 00:15:03.584 cpu : usr=0.77%, sys=1.45%, ctx=531, majf=0, minf=1 00:15:03.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.584 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.584 job3: (groupid=0, jobs=1): err= 0: pid=3295238: Mon Nov 25 14:13:08 2024 00:15:03.584 read: IOPS=16, BW=65.7KiB/s (67.3kB/s)(68.0KiB/1035msec) 00:15:03.584 slat (nsec): min=26270, max=27348, avg=26774.35, stdev=272.43 00:15:03.584 clat (usec): min=40909, max=42072, avg=41411.69, stdev=498.05 00:15:03.584 lat (usec): min=40937, max=42099, avg=41438.47, stdev=498.03 00:15:03.584 clat percentiles (usec): 00:15:03.584 | 1.00th=[41157], 
5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:03.584 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:15:03.584 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:15:03.584 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:03.584 | 99.99th=[42206] 00:15:03.584 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:15:03.584 slat (nsec): min=10069, max=56576, avg=32477.51, stdev=7325.54 00:15:03.584 clat (usec): min=269, max=919, avg=599.65, stdev=123.86 00:15:03.584 lat (usec): min=281, max=952, avg=632.12, stdev=125.30 00:15:03.584 clat percentiles (usec): 00:15:03.584 | 1.00th=[ 310], 5.00th=[ 388], 10.00th=[ 441], 20.00th=[ 486], 00:15:03.584 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:15:03.584 | 70.00th=[ 668], 80.00th=[ 701], 90.00th=[ 758], 95.00th=[ 807], 00:15:03.584 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 922], 00:15:03.584 | 99.99th=[ 922] 00:15:03.584 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:03.584 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:03.584 lat (usec) : 500=22.68%, 750=62.76%, 1000=11.34% 00:15:03.584 lat (msec) : 50=3.21% 00:15:03.584 cpu : usr=0.87%, sys=1.64%, ctx=531, majf=0, minf=1 00:15:03.584 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:03.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:03.584 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:03.584 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:03.584 00:15:03.584 Run status group 0 (all jobs): 00:15:03.584 READ: bw=471KiB/s (482kB/s), 65.7KiB/s-273KiB/s (67.3kB/s-280kB/s), io=488KiB (500kB), run=1006-1037msec 00:15:03.584 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2036KiB/s (2022kB/s-2085kB/s), io=8192KiB (8389kB), run=1006-1037msec 00:15:03.584 00:15:03.584 Disk stats (read/write): 00:15:03.584 nvme0n1: ios=34/512, merge=0/0, ticks=1298/291, in_queue=1589, util=84.17% 00:15:03.584 nvme0n2: ios=107/512, merge=0/0, ticks=639/228, in_queue=867, util=91.34% 00:15:03.584 nvme0n3: ios=36/512, merge=0/0, ticks=1407/280, in_queue=1687, util=92.19% 00:15:03.584 nvme0n4: ios=62/512, merge=0/0, ticks=655/288, in_queue=943, util=95.73% 00:15:03.584 14:13:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:03.584 [global] 00:15:03.584 thread=1 00:15:03.584 invalidate=1 00:15:03.584 rw=write 00:15:03.584 time_based=1 00:15:03.584 runtime=1 00:15:03.584 ioengine=libaio 00:15:03.584 direct=1 00:15:03.584 bs=4096 00:15:03.584 iodepth=128 00:15:03.584 norandommap=0 00:15:03.584 numjobs=1 00:15:03.584 00:15:03.584 verify_dump=1 00:15:03.584 verify_backlog=512 00:15:03.584 verify_state_save=0 00:15:03.584 do_verify=1 00:15:03.584 verify=crc32c-intel 00:15:03.584 [job0] 00:15:03.584 filename=/dev/nvme0n1 00:15:03.584 [job1] 00:15:03.584 filename=/dev/nvme0n2 00:15:03.584 [job2] 00:15:03.584 filename=/dev/nvme0n3 00:15:03.584 [job3] 00:15:03.584 filename=/dev/nvme0n4 00:15:03.584 Could not set queue depth (nvme0n1) 00:15:03.584 Could not set queue depth (nvme0n2) 00:15:03.584 Could not set queue depth (nvme0n3) 00:15:03.584 Could not set queue depth (nvme0n4) 00:15:03.858 job0: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:03.858 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:03.858 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:03.858 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:03.858 fio-3.35 00:15:03.858 Starting 4 threads 00:15:05.267 00:15:05.267 job0: (groupid=0, jobs=1): err= 0: pid=3295757: Mon Nov 25 14:13:10 2024 00:15:05.267 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:15:05.267 slat (nsec): min=906, max=10559k, avg=78730.88, stdev=555535.15 00:15:05.267 clat (usec): min=2210, max=33679, avg=10305.24, stdev=4156.90 00:15:05.267 lat (usec): min=2215, max=33686, avg=10383.97, stdev=4202.22 00:15:05.267 clat percentiles (usec): 00:15:05.267 | 1.00th=[ 5014], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 6980], 00:15:05.267 | 30.00th=[ 7439], 40.00th=[ 8586], 50.00th=[ 9634], 60.00th=[10945], 00:15:05.267 | 70.00th=[11338], 80.00th=[12387], 90.00th=[16188], 95.00th=[18220], 00:15:05.267 | 99.00th=[24773], 99.50th=[28705], 99.90th=[32900], 99.95th=[33817], 00:15:05.267 | 99.99th=[33817] 00:15:05.267 write: IOPS=6490, BW=25.4MiB/s (26.6MB/s)(25.6MiB/1008msec); 0 zone resets 00:15:05.267 slat (nsec): min=1581, max=14744k, avg=72519.88, stdev=524047.42 00:15:05.267 clat (usec): min=1149, max=38271, avg=9877.44, stdev=6046.70 00:15:05.267 lat (usec): min=1157, max=38298, avg=9949.96, stdev=6093.80 00:15:05.267 clat percentiles (usec): 00:15:05.267 | 1.00th=[ 2376], 5.00th=[ 4490], 10.00th=[ 5473], 20.00th=[ 6259], 00:15:05.267 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7570], 60.00th=[ 8848], 00:15:05.267 | 70.00th=[ 9896], 80.00th=[12125], 90.00th=[17433], 95.00th=[25035], 00:15:05.267 | 99.00th=[34341], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:15:05.267 | 99.99th=[38011] 00:15:05.267 bw ( KiB/s): min=20480, max=30832, per=26.85%, avg=25656.00, stdev=7319.97, samples=2 00:15:05.267 iops : min= 5120, max= 7708, avg=6414.00, stdev=1829.99, samples=2 00:15:05.267 lat (msec) : 2=0.29%, 4=1.40%, 10=61.22%, 20=31.82%, 50=5.26% 00:15:05.267 cpu : usr=4.77%, sys=7.35%, ctx=435, majf=0, minf=1 00:15:05.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:05.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.267 issued rwts: total=6144,6542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.267 job1: (groupid=0, jobs=1): err= 0: pid=3295758: Mon Nov 25 14:13:10 2024 00:15:05.267 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:15:05.267 slat (nsec): min=888, max=25010k, avg=77714.59, stdev=580694.80 00:15:05.267 clat (usec): min=1544, max=73184, avg=9939.97, stdev=8770.06 00:15:05.267 lat (usec): min=1550, max=73189, avg=10017.69, stdev=8833.26 00:15:05.267 clat percentiles (usec): 00:15:05.267 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6128], 00:15:05.267 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8160], 00:15:05.267 | 70.00th=[ 8848], 80.00th=[ 9765], 90.00th=[15270], 95.00th=[20579], 00:15:05.267 | 99.00th=[61080], 99.50th=[64750], 99.90th=[72877], 99.95th=[72877], 00:15:05.267 | 99.99th=[72877] 00:15:05.267 write: IOPS=7142, BW=27.9MiB/s 
(29.3MB/s)(28.0MiB/1002msec); 0 zone resets 00:15:05.267 slat (nsec): min=1529, max=11328k, avg=60589.69, stdev=392057.87 00:15:05.267 clat (usec): min=642, max=43198, avg=8479.70, stdev=4889.48 00:15:05.267 lat (usec): min=650, max=43203, avg=8540.29, stdev=4915.28 00:15:05.267 clat percentiles (usec): 00:15:05.267 | 1.00th=[ 1549], 5.00th=[ 3621], 10.00th=[ 4948], 20.00th=[ 5800], 00:15:05.267 | 30.00th=[ 6587], 40.00th=[ 7046], 50.00th=[ 7439], 60.00th=[ 7767], 00:15:05.267 | 70.00th=[ 8225], 80.00th=[ 8979], 90.00th=[14222], 95.00th=[20579], 00:15:05.267 | 99.00th=[27395], 99.50th=[29754], 99.90th=[40633], 99.95th=[40633], 00:15:05.267 | 99.99th=[43254] 00:15:05.267 bw ( KiB/s): min=24576, max=31664, per=29.43%, avg=28120.00, stdev=5011.97, samples=2 00:15:05.267 iops : min= 6144, max= 7916, avg=7030.00, stdev=1252.99, samples=2 00:15:05.267 lat (usec) : 750=0.06%, 1000=0.01% 00:15:05.267 lat (msec) : 2=1.04%, 4=2.35%, 10=79.89%, 20=11.42%, 50=4.31% 00:15:05.267 lat (msec) : 100=0.92% 00:15:05.267 cpu : usr=4.00%, sys=7.99%, ctx=596, majf=0, minf=1 00:15:05.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:05.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.267 issued rwts: total=6656,7157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.267 job2: (groupid=0, jobs=1): err= 0: pid=3295759: Mon Nov 25 14:13:10 2024 00:15:05.267 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:15:05.267 slat (nsec): min=1070, max=16175k, avg=96117.13, stdev=717728.91 00:15:05.267 clat (usec): min=3747, max=53000, avg=13670.59, stdev=8628.12 00:15:05.267 lat (usec): min=3753, max=62993, avg=13766.71, stdev=8697.09 00:15:05.267 clat percentiles (usec): 00:15:05.267 | 1.00th=[ 5145], 5.00th=[ 5997], 10.00th=[ 7111], 20.00th=[ 8160], 00:15:05.267 | 30.00th=[ 8848], 40.00th=[10028], 50.00th=[10814], 60.00th=[11600], 00:15:05.267 | 70.00th=[13304], 80.00th=[16319], 90.00th=[27657], 95.00th=[30016], 00:15:05.267 | 99.00th=[46924], 99.50th=[52167], 99.90th=[53216], 99.95th=[53216], 00:15:05.267 | 99.99th=[53216] 00:15:05.267 write: IOPS=4900, BW=19.1MiB/s (20.1MB/s)(19.3MiB/1009msec); 0 zone resets 00:15:05.267 slat (nsec): min=1658, max=15612k, avg=105383.86, stdev=799816.54 00:15:05.267 clat (usec): min=2091, max=47794, avg=13136.63, stdev=7486.36 00:15:05.267 lat (usec): min=2127, max=50760, avg=13242.01, stdev=7569.80 00:15:05.267 clat percentiles (usec): 00:15:05.267 | 1.00th=[ 2966], 5.00th=[ 5407], 10.00th=[ 6325], 20.00th=[ 7439], 00:15:05.267 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[12125], 00:15:05.267 | 70.00th=[15401], 80.00th=[18744], 90.00th=[25297], 95.00th=[27395], 00:15:05.268 | 99.00th=[36439], 99.50th=[40633], 99.90th=[40633], 99.95th=[42206], 00:15:05.268 | 99.99th=[47973] 00:15:05.268 bw ( KiB/s): min=13960, max=24576, per=20.16%, avg=19268.00, stdev=7506.65, samples=2 00:15:05.268 iops : min= 3490, max= 6144, avg=4817.00, stdev=1876.66, samples=2 00:15:05.268 lat (msec) : 4=1.21%, 10=42.01%, 20=40.30%, 50=16.02%, 100=0.46% 00:15:05.268 cpu : usr=4.17%, sys=5.56%, ctx=294, majf=0, minf=1 00:15:05.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:05.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:15:05.268 issued rwts: total=4608,4945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.268 job3: (groupid=0, jobs=1): err= 0: pid=3295760: Mon Nov 25 14:13:10 2024 00:15:05.268 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:15:05.268 slat (nsec): min=966, max=10216k, avg=94664.10, stdev=550348.03 00:15:05.268 clat (usec): min=5481, max=35340, avg=12217.07, stdev=5324.86 00:15:05.268 lat (usec): min=5491, max=35366, avg=12311.74, stdev=5366.71 00:15:05.268 clat percentiles (usec): 00:15:05.268 | 1.00th=[ 5932], 5.00th=[ 7373], 10.00th=[ 7570], 20.00th=[ 8717], 00:15:05.268 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10683], 00:15:05.268 | 70.00th=[11469], 80.00th=[16057], 90.00th=[19792], 95.00th=[23987], 00:15:05.268 | 99.00th=[29230], 99.50th=[30016], 99.90th=[35390], 99.95th=[35390], 00:15:05.268 | 99.99th=[35390] 00:15:05.268 write: IOPS=5412, BW=21.1MiB/s (22.2MB/s)(21.3MiB/1009msec); 0 zone resets 00:15:05.268 slat (nsec): min=1636, max=12532k, avg=90085.88, stdev=597452.86 00:15:05.268 clat (usec): min=640, max=35611, avg=11898.72, stdev=5282.81 00:15:05.268 lat (usec): min=1757, max=35613, avg=11988.80, stdev=5317.64 00:15:05.268 clat percentiles (usec): 00:15:05.268 | 1.00th=[ 4490], 5.00th=[ 7046], 10.00th=[ 7504], 20.00th=[ 8455], 00:15:05.268 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10945], 00:15:05.268 | 70.00th=[11863], 80.00th=[14091], 90.00th=[20579], 95.00th=[23462], 00:15:05.268 | 99.00th=[32113], 99.50th=[33162], 99.90th=[35390], 99.95th=[35390], 00:15:05.268 | 99.99th=[35390] 00:15:05.268 bw ( KiB/s): min=18088, max=24576, per=22.32%, avg=21332.00, stdev=4587.71, samples=2 00:15:05.268 iops : min= 4522, max= 6144, avg=5333.00, stdev=1146.93, samples=2 00:15:05.268 lat (usec) : 750=0.01% 00:15:05.268 lat (msec) : 2=0.13%, 4=0.14%, 10=47.58%, 20=41.39%, 50=10.75% 00:15:05.268 cpu : usr=2.78%, sys=6.35%, ctx=447, majf=0, minf=2 00:15:05.268 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:05.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:05.268 issued rwts: total=5120,5461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:05.268 00:15:05.268 Run status group 0 (all jobs): 00:15:05.268 READ: bw=87.2MiB/s (91.5MB/s), 17.8MiB/s-25.9MiB/s (18.7MB/s-27.2MB/s), io=88.0MiB (92.3MB), run=1002-1009msec 00:15:05.268 WRITE: bw=93.3MiB/s (97.9MB/s), 19.1MiB/s-27.9MiB/s (20.1MB/s-29.3MB/s), io=94.2MiB (98.7MB), run=1002-1009msec 00:15:05.268 00:15:05.268 Disk stats (read/write): 00:15:05.268 nvme0n1: ios=5170/5205, merge=0/0, ticks=38763/40127, in_queue=78890, util=87.07% 00:15:05.268 nvme0n2: ios=5074/5120, merge=0/0, ticks=17271/13294, in_queue=30565, util=88.69% 00:15:05.268 nvme0n3: ios=4390/4608, merge=0/0, ticks=28139/29825, in_queue=57964, util=91.14% 00:15:05.268 nvme0n4: ios=4636/4856, merge=0/0, ticks=18836/19663, in_queue=38499, util=96.37% 00:15:05.268 14:13:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:05.268 [global] 00:15:05.268 thread=1 00:15:05.268 invalidate=1 00:15:05.268 rw=randwrite 00:15:05.268 time_based=1 00:15:05.268 runtime=1 00:15:05.268 ioengine=libaio 00:15:05.268 direct=1 
00:15:05.268 bs=4096 00:15:05.268 iodepth=128 00:15:05.268 norandommap=0 00:15:05.268 numjobs=1 00:15:05.268 00:15:05.268 verify_dump=1 00:15:05.268 verify_backlog=512 00:15:05.268 verify_state_save=0 00:15:05.268 do_verify=1 00:15:05.268 verify=crc32c-intel 00:15:05.268 [job0] 00:15:05.268 filename=/dev/nvme0n1 00:15:05.268 [job1] 00:15:05.268 filename=/dev/nvme0n2 00:15:05.268 [job2] 00:15:05.268 filename=/dev/nvme0n3 00:15:05.268 [job3] 00:15:05.268 filename=/dev/nvme0n4 00:15:05.268 Could not set queue depth (nvme0n1) 00:15:05.268 Could not set queue depth (nvme0n2) 00:15:05.268 Could not set queue depth (nvme0n3) 00:15:05.268 Could not set queue depth (nvme0n4) 00:15:05.534 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.534 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.534 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.534 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:05.534 fio-3.35 00:15:05.534 Starting 4 threads 00:15:06.918 00:15:06.918 job0: (groupid=0, jobs=1): err= 0: pid=3296284: Mon Nov 25 14:13:11 2024 00:15:06.918 read: IOPS=6135, BW=24.0MiB/s (25.1MB/s)(24.1MiB/1004msec) 00:15:06.918 slat (nsec): min=904, max=7724.5k, avg=50955.59, stdev=392914.51 00:15:06.918 clat (usec): min=1754, max=46879, avg=7756.07, stdev=3478.23 00:15:06.918 lat (usec): min=1762, max=46885, avg=7807.02, stdev=3512.03 00:15:06.918 clat percentiles (usec): 00:15:06.918 | 1.00th=[ 2802], 5.00th=[ 4015], 10.00th=[ 4752], 20.00th=[ 6128], 00:15:06.918 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7701], 00:15:06.918 | 70.00th=[ 8094], 80.00th=[ 8717], 90.00th=[10421], 95.00th=[12387], 00:15:06.918 | 99.00th=[22938], 99.50th=[29754], 99.90th=[40109], 99.95th=[46924], 00:15:06.918 | 99.99th=[46924] 00:15:06.918 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:15:06.918 slat (nsec): min=1520, max=7631.8k, avg=77952.77, stdev=509376.35 00:15:06.918 clat (usec): min=435, max=77477, avg=11968.46, stdev=14464.59 00:15:06.918 lat (usec): min=442, max=77485, avg=12046.42, stdev=14565.24 00:15:06.918 clat percentiles (usec): 00:15:06.918 | 1.00th=[ 1237], 5.00th=[ 2606], 10.00th=[ 4047], 20.00th=[ 4883], 00:15:06.918 | 30.00th=[ 5604], 40.00th=[ 6194], 50.00th=[ 7308], 60.00th=[ 8717], 00:15:06.918 | 70.00th=[ 9765], 80.00th=[13042], 90.00th=[22414], 95.00th=[52691], 00:15:06.918 | 99.00th=[70779], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:15:06.918 | 99.99th=[77071] 00:15:06.918 bw ( KiB/s): min=19592, max=32768, per=28.58%, avg=26180.00, stdev=9316.84, samples=2 00:15:06.918 iops : min= 4898, max= 8192, avg=6545.00, stdev=2329.21, samples=2 00:15:06.918 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.09% 00:15:06.918 lat (msec) : 2=1.75%, 4=5.42%, 10=72.18%, 20=13.86%, 50=3.85% 00:15:06.918 lat (msec) : 100=2.84% 00:15:06.918 cpu : usr=4.89%, sys=8.08%, ctx=460, majf=0, minf=2 00:15:06.918 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:06.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.918 issued rwts: total=6160,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.918 latency : target=0, window=0, percentile=100.00%, depth=128 
00:15:06.918 job1: (groupid=0, jobs=1): err= 0: pid=3296285: Mon Nov 25 14:13:11 2024 00:15:06.918 read: IOPS=4642, BW=18.1MiB/s (19.0MB/s)(18.2MiB/1003msec) 00:15:06.918 slat (nsec): min=904, max=43570k, avg=126318.24, stdev=995618.16 00:15:06.918 clat (usec): min=991, max=53432, avg=14785.40, stdev=9082.46 00:15:06.918 lat (usec): min=3607, max=53440, avg=14911.72, stdev=9150.53 00:15:06.918 clat percentiles (usec): 00:15:06.918 | 1.00th=[ 3884], 5.00th=[ 7177], 10.00th=[ 7832], 20.00th=[ 8225], 00:15:06.918 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[11469], 60.00th=[14091], 00:15:06.919 | 70.00th=[15926], 80.00th=[20317], 90.00th=[24773], 95.00th=[31589], 00:15:06.919 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:15:06.919 | 99.99th=[53216] 00:15:06.919 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:15:06.919 slat (nsec): min=1476, max=7196.8k, avg=75116.77, stdev=441513.97 00:15:06.919 clat (usec): min=3896, max=53345, avg=11363.66, stdev=7912.98 00:15:06.919 lat (usec): min=4189, max=53351, avg=11438.78, stdev=7921.21 00:15:06.919 clat percentiles (usec): 00:15:06.919 | 1.00th=[ 5211], 5.00th=[ 6128], 10.00th=[ 6521], 20.00th=[ 7373], 00:15:06.919 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:15:06.919 | 70.00th=[10683], 80.00th=[13042], 90.00th=[16909], 95.00th=[21103], 00:15:06.919 | 99.00th=[51119], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:15:06.919 | 99.99th=[53216] 00:15:06.919 bw ( KiB/s): min=19840, max=20480, per=22.01%, avg=20160.00, stdev=452.55, samples=2 00:15:06.919 iops : min= 4960, max= 5120, avg=5040.00, stdev=113.14, samples=2 00:15:06.919 lat (usec) : 1000=0.01% 00:15:06.919 lat (msec) : 4=0.49%, 10=51.61%, 20=35.10%, 50=10.68%, 100=2.12% 00:15:06.919 cpu : usr=3.09%, sys=5.49%, ctx=392, majf=0, minf=1 00:15:06.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:06.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.919 issued rwts: total=4656,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.919 job2: (groupid=0, jobs=1): err= 0: pid=3296286: Mon Nov 25 14:13:11 2024 00:15:06.919 read: IOPS=5643, BW=22.0MiB/s (23.1MB/s)(22.2MiB/1006msec) 00:15:06.919 slat (nsec): min=983, max=13726k, avg=77713.40, stdev=559049.47 00:15:06.919 clat (usec): min=3649, max=38761, avg=10458.43, stdev=4987.45 00:15:06.919 lat (usec): min=3657, max=39374, avg=10536.15, stdev=5022.19 00:15:06.919 clat percentiles (usec): 00:15:06.919 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 6980], 20.00th=[ 7373], 00:15:06.919 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9896], 00:15:06.919 | 70.00th=[10945], 80.00th=[12780], 90.00th=[14484], 95.00th=[19530], 00:15:06.919 | 99.00th=[33162], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:15:06.919 | 99.99th=[38536] 00:15:06.919 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:15:06.919 slat (nsec): min=1614, max=10042k, avg=83434.49, stdev=539610.94 00:15:06.919 clat (usec): min=674, max=35676, avg=11043.80, stdev=5998.98 00:15:06.919 lat (usec): min=708, max=35685, avg=11127.23, stdev=6036.74 00:15:06.919 clat percentiles (usec): 00:15:06.919 | 1.00th=[ 2008], 5.00th=[ 4817], 10.00th=[ 5604], 20.00th=[ 6521], 00:15:06.919 | 30.00th=[ 6980], 40.00th=[ 8094], 50.00th=[ 9110], 
60.00th=[10945], 00:15:06.919 | 70.00th=[13173], 80.00th=[15401], 90.00th=[19006], 95.00th=[23200], 00:15:06.919 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:15:06.919 | 99.99th=[35914] 00:15:06.919 bw ( KiB/s): min=20480, max=28016, per=26.47%, avg=24248.00, stdev=5328.76, samples=2 00:15:06.919 iops : min= 5120, max= 7004, avg=6062.00, stdev=1332.19, samples=2 00:15:06.919 lat (usec) : 750=0.03% 00:15:06.919 lat (msec) : 2=0.45%, 4=1.04%, 10=57.99%, 20=33.89%, 50=6.61% 00:15:06.919 cpu : usr=5.07%, sys=5.77%, ctx=453, majf=0, minf=1 00:15:06.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:06.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.919 issued rwts: total=5677,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.919 job3: (groupid=0, jobs=1): err= 0: pid=3296289: Mon Nov 25 14:13:11 2024 00:15:06.919 read: IOPS=5044, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1004msec) 00:15:06.919 slat (nsec): min=984, max=47392k, avg=100592.75, stdev=889071.94 00:15:06.919 clat (usec): min=1603, max=70026, avg=12575.74, stdev=7894.21 00:15:06.919 lat (usec): min=4476, max=70033, avg=12676.34, stdev=7953.74 00:15:06.919 clat percentiles (usec): 00:15:06.919 | 1.00th=[ 4883], 5.00th=[ 5997], 10.00th=[ 7504], 20.00th=[ 8586], 00:15:06.919 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[11338], 00:15:06.919 | 70.00th=[12780], 80.00th=[14615], 90.00th=[19006], 95.00th=[21103], 00:15:06.919 | 99.00th=[67634], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:15:06.919 | 99.99th=[69731] 00:15:06.919 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:15:06.919 slat (nsec): min=1620, max=12328k, avg=89860.96, stdev=555853.51 00:15:06.919 clat (usec): min=3872, max=67299, avg=12304.09, stdev=7772.82 00:15:06.919 lat (usec): min=4017, max=67325, avg=12393.95, stdev=7798.11 00:15:06.919 clat percentiles (usec): 00:15:06.919 | 1.00th=[ 4817], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 8029], 00:15:06.919 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[10683], 00:15:06.919 | 70.00th=[12518], 80.00th=[15664], 90.00th=[20055], 95.00th=[22676], 00:15:06.919 | 99.00th=[58983], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:15:06.919 | 99.99th=[67634] 00:15:06.919 bw ( KiB/s): min=18296, max=22664, per=22.36%, avg=20480.00, stdev=3088.64, samples=2 00:15:06.919 iops : min= 4574, max= 5666, avg=5120.00, stdev=772.16, samples=2 00:15:06.919 lat (msec) : 2=0.01%, 4=0.01%, 10=45.87%, 20=45.60%, 50=7.27% 00:15:06.919 lat (msec) : 100=1.25% 00:15:06.919 cpu : usr=2.99%, sys=6.68%, ctx=410, majf=0, minf=1 00:15:06.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:06.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:06.919 issued rwts: total=5065,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:06.919 00:15:06.919 Run status group 0 (all jobs): 00:15:06.919 READ: bw=83.7MiB/s (87.8MB/s), 18.1MiB/s-24.0MiB/s (19.0MB/s-25.1MB/s), io=84.2MiB (88.3MB), run=1003-1006msec 00:15:06.919 WRITE: bw=89.5MiB/s (93.8MB/s), 19.9MiB/s-25.9MiB/s (20.9MB/s-27.2MB/s), io=90.0MiB (94.4MB), run=1003-1006msec 00:15:06.919 
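The run-status READ and WRITE lines aggregate the four jobs: 24.0 + 18.1 + 22.0 + 19.7 gives the 83.7 MiB/s read figure and 25.9 + 19.9 + 23.9 + 19.9 the 89.5 MiB/s write figure, within rounding of the per-job values. For reference, the [global] and [job0] sections dumped before "Starting 4 threads" map directly onto fio command-line options; a sketch of an equivalent standalone invocation, assuming the same namespaces are still connected and reusing the log's own device path:

fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=4096 \
    --iodepth=128 --ioengine=libaio --direct=1 --time_based --runtime=1 \
    --numjobs=1 --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
    --verify_backlog=512    # one job per namespace; the wrapper runs four in parallel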
00:15:06.919 Disk stats (read/write): 00:15:06.919 nvme0n1: ios=4658/5055, merge=0/0, ticks=30732/54608, in_queue=85340, util=86.27% 00:15:06.919 nvme0n2: ios=3634/3982, merge=0/0, ticks=17057/10994, in_queue=28051, util=85.30% 00:15:06.919 nvme0n3: ios=4657/4615, merge=0/0, ticks=29221/25929, in_queue=55150, util=95.46% 00:15:06.919 nvme0n4: ios=4090/4096, merge=0/0, ticks=20182/18007, in_queue=38189, util=98.07% 00:15:06.919 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:06.919 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3296508 00:15:06.919 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:06.919 14:13:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:06.919 [global] 00:15:06.919 thread=1 00:15:06.919 invalidate=1 00:15:06.919 rw=read 00:15:06.919 time_based=1 00:15:06.919 runtime=10 00:15:06.919 ioengine=libaio 00:15:06.919 direct=1 00:15:06.919 bs=4096 00:15:06.919 iodepth=1 00:15:06.919 norandommap=1 00:15:06.919 numjobs=1 00:15:06.919 00:15:06.919 [job0] 00:15:06.919 filename=/dev/nvme0n1 00:15:06.919 [job1] 00:15:06.919 filename=/dev/nvme0n2 00:15:06.919 [job2] 00:15:06.919 filename=/dev/nvme0n3 00:15:06.919 [job3] 00:15:06.919 filename=/dev/nvme0n4 00:15:06.919 Could not set queue depth (nvme0n1) 00:15:06.919 Could not set queue depth (nvme0n2) 00:15:06.919 Could not set queue depth (nvme0n3) 00:15:06.919 Could not set queue depth (nvme0n4) 00:15:07.180 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.180 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.180 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.180 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:07.180 fio-3.35 00:15:07.180 Starting 4 threads 00:15:09.725 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:09.984 14:13:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:09.984 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10338304, buflen=4096 00:15:09.984 fio: pid=3296817, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:10.243 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.243 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:10.243 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:15:10.243 fio: pid=3296816, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:10.243 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.243 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:10.243 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11984896, buflen=4096 00:15:10.243 fio: pid=3296808, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:10.502 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.502 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:10.502 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3260416, buflen=4096 00:15:10.502 fio: pid=3296810, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:10.502 00:15:10.503 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3296808: Mon Nov 25 14:13:15 2024 00:15:10.503 read: IOPS=995, BW=3981KiB/s (4076kB/s)(11.4MiB/2940msec) 00:15:10.503 slat (usec): min=6, max=15969, avg=45.78, stdev=524.55 00:15:10.503 clat (usec): min=455, max=1174, avg=945.18, stdev=82.67 00:15:10.503 lat (usec): min=482, max=17012, avg=990.97, stdev=532.34 00:15:10.503 clat percentiles (usec): 00:15:10.503 | 1.00th=[ 701], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 889], 00:15:10.503 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:15:10.503 | 70.00th=[ 988], 80.00th=[ 1012], 90.00th=[ 1037], 95.00th=[ 1057], 00:15:10.503 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1172], 00:15:10.503 | 99.99th=[ 1172] 00:15:10.503 bw ( KiB/s): min= 4032, max= 4200, per=50.53%, avg=4099.20, stdev=62.12, samples=5 00:15:10.503 iops : min= 1008, max= 1050, avg=1024.80, stdev=15.53, samples=5 00:15:10.503 lat (usec) : 500=0.03%, 750=2.80%, 1000=73.62% 00:15:10.503 lat (msec) : 2=23.51% 00:15:10.503 cpu : usr=1.97%, sys=3.78%, ctx=2932, majf=0, minf=1 00:15:10.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 issued rwts: total=2927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.503 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3296810: Mon Nov 25 14:13:15 2024 00:15:10.503 read: IOPS=256, BW=1023KiB/s (1047kB/s)(3184KiB/3113msec) 00:15:10.503 slat (usec): min=7, max=19688, avg=50.54, stdev=696.51 00:15:10.503 clat (usec): min=645, max=42100, avg=3826.16, stdev=10330.35 00:15:10.503 lat (usec): min=670, max=61003, avg=3876.74, stdev=10443.86 00:15:10.503 clat percentiles (usec): 00:15:10.503 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 873], 20.00th=[ 930], 00:15:10.503 | 30.00th=[ 971], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1037], 00:15:10.503 | 70.00th=[ 1074], 80.00th=[ 1156], 90.00th=[ 1270], 95.00th=[41681], 00:15:10.503 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:10.503 | 99.99th=[42206] 00:15:10.503 bw ( KiB/s): min= 89, max= 3120, per=13.03%, avg=1057.50, stdev=1130.11, samples=6 00:15:10.503 iops : min= 22, max= 780, avg=264.33, stdev=282.57, samples=6 00:15:10.503 lat (usec) : 750=1.76%, 1000=42.41% 00:15:10.503 lat (msec) : 2=48.68%, 4=0.13%, 50=6.90% 00:15:10.503 cpu 
: usr=0.39%, sys=0.67%, ctx=801, majf=0, minf=2 00:15:10.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 issued rwts: total=797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.503 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3296816: Mon Nov 25 14:13:15 2024 00:15:10.503 read: IOPS=24, BW=96.2KiB/s (98.5kB/s)(268KiB/2786msec) 00:15:10.503 slat (usec): min=27, max=15661, avg=257.97, stdev=1895.80 00:15:10.503 clat (usec): min=706, max=42093, avg=40988.85, stdev=5015.29 00:15:10.503 lat (usec): min=739, max=42120, avg=41016.92, stdev=5014.59 00:15:10.503 clat percentiles (usec): 00:15:10.503 | 1.00th=[ 709], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:10.503 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:15:10.503 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:10.503 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:10.503 | 99.99th=[42206] 00:15:10.503 bw ( KiB/s): min= 88, max= 104, per=1.20%, avg=97.60, stdev= 6.69, samples=5 00:15:10.503 iops : min= 22, max= 26, avg=24.40, stdev= 1.67, samples=5 00:15:10.503 lat (usec) : 750=1.47% 00:15:10.503 lat (msec) : 50=97.06% 00:15:10.503 cpu : usr=0.14%, sys=0.00%, ctx=70, majf=0, minf=2 00:15:10.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.503 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.503 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3296817: Mon Nov 25 14:13:15 2024 00:15:10.503 read: IOPS=970, BW=3880KiB/s (3973kB/s)(9.86MiB/2602msec) 00:15:10.503 slat (nsec): min=6896, max=61212, avg=27352.29, stdev=2529.63 00:15:10.503 clat (usec): min=622, max=41797, avg=988.78, stdev=815.81 00:15:10.503 lat (usec): min=649, max=41824, avg=1016.13, stdev=815.81 00:15:10.503 clat percentiles (usec): 00:15:10.503 | 1.00th=[ 766], 5.00th=[ 832], 10.00th=[ 873], 20.00th=[ 922], 00:15:10.503 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996], 00:15:10.503 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:15:10.503 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1205], 99.95th=[ 1221], 00:15:10.503 | 99.99th=[41681] 00:15:10.503 bw ( KiB/s): min= 3656, max= 4016, per=48.28%, avg=3916.80, stdev=148.87, samples=5 00:15:10.503 iops : min= 914, max= 1004, avg=979.20, stdev=37.22, samples=5 00:15:10.503 lat (usec) : 750=0.51%, 1000=62.46% 00:15:10.503 lat (msec) : 2=36.95%, 50=0.04% 00:15:10.503 cpu : usr=1.46%, sys=4.27%, ctx=2526, majf=0, minf=2 00:15:10.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.503 issued rwts: total=2525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.503 latency : target=0, window=0, percentile=100.00%, depth=1 
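The err=95 (Operation not supported) results above are the expected outcome of this pass: the script hot-removes each namespace's backing bdev while the 10-second read jobs are in flight, so in-flight reads fail and each job summary shows how much I/O completed before the removal. A condensed sketch of the sequence, using the commands as they appear in the log with the long absolute paths abbreviated:

scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &    # background read jobs on nvme0n1..nvme0n4
fio_pid=$!
scripts/rpc.py bdev_raid_delete concat0                     # hot-remove backing bdevs mid-run
scripts/rpc.py bdev_raid_delete raid0
scripts/rpc.py bdev_malloc_delete Malloc0                   # Malloc1 and Malloc2 are deleted the same way
wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'   # a nonzero fio exit is the pass condition here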
00:15:10.503 00:15:10.503 Run status group 0 (all jobs): 00:15:10.503 READ: bw=8112KiB/s (8306kB/s), 96.2KiB/s-3981KiB/s (98.5kB/s-4076kB/s), io=24.7MiB (25.9MB), run=2602-3113msec 00:15:10.503 00:15:10.503 Disk stats (read/write): 00:15:10.503 nvme0n1: ios=2865/0, merge=0/0, ticks=2545/0, in_queue=2545, util=93.66% 00:15:10.503 nvme0n2: ios=795/0, merge=0/0, ticks=2989/0, in_queue=2989, util=95.11% 00:15:10.503 nvme0n3: ios=107/0, merge=0/0, ticks=3558/0, in_queue=3558, util=99.48% 00:15:10.503 nvme0n4: ios=2524/0, merge=0/0, ticks=2439/0, in_queue=2439, util=96.46% 00:15:10.763 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.763 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:10.763 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:10.763 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:11.023 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:11.023 14:13:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:11.282 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:11.282 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:11.282 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:11.282 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3296508 00:15:11.282 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:11.282 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:11.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug 
test: fio failed as expected' 00:15:11.542 nvmf hotplug test: fio failed as expected 00:15:11.542 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:11.802 rmmod nvme_tcp 00:15:11.802 rmmod nvme_fabrics 00:15:11.802 rmmod nvme_keyring 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3292833 ']' 00:15:11.802 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3292833 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3292833 ']' 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3292833 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3292833 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3292833' 00:15:11.803 killing process with pid 3292833 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3292833 00:15:11.803 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3292833 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:12.062 14:13:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:12.062 14:13:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:13.976 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:13.976 00:15:13.976 real 0m29.024s 00:15:13.976 user 2m31.962s 00:15:13.976 sys 0m9.275s 00:15:13.976 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.976 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.976 ************************************ 00:15:13.976 END TEST nvmf_fio_target 00:15:13.976 ************************************ 00:15:13.976 14:13:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:13.976 14:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:13.976 14:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.976 14:13:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:14.238 ************************************ 00:15:14.238 START TEST nvmf_bdevio 00:15:14.238 ************************************ 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:14.238 * Looking for test storage... 
00:15:14.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:14.238 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.239 --rc genhtml_branch_coverage=1 00:15:14.239 --rc genhtml_function_coverage=1 00:15:14.239 --rc genhtml_legend=1 00:15:14.239 --rc geninfo_all_blocks=1 00:15:14.239 --rc geninfo_unexecuted_blocks=1 00:15:14.239 00:15:14.239 ' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.239 --rc genhtml_branch_coverage=1 00:15:14.239 --rc genhtml_function_coverage=1 00:15:14.239 --rc genhtml_legend=1 00:15:14.239 --rc geninfo_all_blocks=1 00:15:14.239 --rc geninfo_unexecuted_blocks=1 00:15:14.239 00:15:14.239 ' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.239 --rc genhtml_branch_coverage=1 00:15:14.239 --rc genhtml_function_coverage=1 00:15:14.239 --rc genhtml_legend=1 00:15:14.239 --rc geninfo_all_blocks=1 00:15:14.239 --rc geninfo_unexecuted_blocks=1 00:15:14.239 00:15:14.239 ' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:14.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:14.239 --rc genhtml_branch_coverage=1 00:15:14.239 --rc genhtml_function_coverage=1 00:15:14.239 --rc genhtml_legend=1 00:15:14.239 --rc geninfo_all_blocks=1 00:15:14.239 --rc geninfo_unexecuted_blocks=1 00:15:14.239 00:15:14.239 ' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:14.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:14.239 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:15:14.500 14:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:21.326 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:21.618 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:21.618 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:21.618 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:21.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:21.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:21.619 14:13:26 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:21.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:21.619 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:21.619 
14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:21.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:15:21.619 00:15:21.619 --- 10.0.0.2 ping statistics --- 00:15:21.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.619 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:15:21.619 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:21.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:21.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:15:21.619 00:15:21.619 --- 10.0.0.1 ping statistics --- 00:15:21.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.620 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:21.620 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3301857 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3301857 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3301857 ']' 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.882 14:13:26 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.882 [2024-11-25 14:13:26.816564] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:15:21.882 [2024-11-25 14:13:26.816636] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.882 [2024-11-25 14:13:26.917248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.882 [2024-11-25 14:13:26.970445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.882 [2024-11-25 14:13:26.970495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.882 [2024-11-25 14:13:26.970504] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.882 [2024-11-25 14:13:26.970512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.882 [2024-11-25 14:13:26.970519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.145 [2024-11-25 14:13:26.972863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:22.145 [2024-11-25 14:13:26.973024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:22.145 [2024-11-25 14:13:26.973201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:22.145 [2024-11-25 14:13:26.973245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.718 [2024-11-25 14:13:27.685627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.718 Malloc0 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.718 14:13:27 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.718 [2024-11-25 14:13:27.761084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:22.718 { 00:15:22.718 "params": { 00:15:22.718 "name": "Nvme$subsystem", 00:15:22.718 "trtype": "$TEST_TRANSPORT", 00:15:22.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:22.718 "adrfam": "ipv4", 00:15:22.718 "trsvcid": "$NVMF_PORT", 00:15:22.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:22.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:22.718 "hdgst": ${hdgst:-false}, 00:15:22.718 "ddgst": ${ddgst:-false} 00:15:22.718 }, 00:15:22.718 "method": "bdev_nvme_attach_controller" 00:15:22.718 } 00:15:22.718 EOF 00:15:22.718 )") 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:15:22.718 14:13:27 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:22.718 "params": { 00:15:22.718 "name": "Nvme1", 00:15:22.718 "trtype": "tcp", 00:15:22.718 "traddr": "10.0.0.2", 00:15:22.718 "adrfam": "ipv4", 00:15:22.718 "trsvcid": "4420", 00:15:22.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:22.718 "hdgst": false, 00:15:22.718 "ddgst": false 00:15:22.718 }, 00:15:22.718 "method": "bdev_nvme_attach_controller" 00:15:22.718 }' 00:15:22.979 [2024-11-25 14:13:27.820852] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
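For reference, the JSON printed just above is only the bdev_nvme_attach_controller config entry that gen_nvmf_target_json feeds to bdevio over /dev/fd/62. A minimal standalone sketch of the same attach follows, written to a file instead of a pipe; the surrounding "subsystems"/"bdev" wrapper and the file path are assumptions based on SPDK's usual JSON-config shape, not something this log shows:

cat > /tmp/bdevio.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# run bdevio against the target already listening on 10.0.0.2:4420
./test/bdev/bdevio/bdevio --json /tmp/bdevio.json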
00:15:22.979 [2024-11-25 14:13:27.820917] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302199 ] 00:15:22.979 [2024-11-25 14:13:27.912145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.979 [2024-11-25 14:13:27.968096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.979 [2024-11-25 14:13:27.968230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.979 [2024-11-25 14:13:27.968264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.239 I/O targets: 00:15:23.239 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:23.239 00:15:23.239 00:15:23.239 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.239 http://cunit.sourceforge.net/ 00:15:23.239 00:15:23.239 00:15:23.239 Suite: bdevio tests on: Nvme1n1 00:15:23.239 Test: blockdev write read block ...passed 00:15:23.500 Test: blockdev write zeroes read block ...passed 00:15:23.500 Test: blockdev write zeroes read no split ...passed 00:15:23.500 Test: blockdev write zeroes read split ...passed 00:15:23.500 Test: blockdev write zeroes read split partial ...passed 00:15:23.500 Test: blockdev reset ...[2024-11-25 14:13:28.377100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:23.500 [2024-11-25 14:13:28.377201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x541400 (9): Bad file descriptor 00:15:23.500 [2024-11-25 14:13:28.431288] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:23.500 passed 00:15:23.500 Test: blockdev write read 8 blocks ...passed 00:15:23.500 Test: blockdev write read size > 128k ...passed 00:15:23.500 Test: blockdev write read invalid size ...passed 00:15:23.500 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.500 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.500 Test: blockdev write read max offset ...passed 00:15:23.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.762 Test: blockdev writev readv 8 blocks ...passed 00:15:23.762 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.762 Test: blockdev writev readv block ...passed 00:15:23.762 Test: blockdev writev readv size > 128k ...passed 00:15:23.762 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.762 Test: blockdev comparev and writev ...[2024-11-25 14:13:28.652224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.652272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.652288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.652297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.652706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.652720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.652735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.652743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.653117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.653130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.653145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.653154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.653546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.653559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.653575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:23.762 [2024-11-25 14:13:28.653584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:23.762 passed 00:15:23.762 Test: blockdev nvme passthru rw ...passed 00:15:23.762 Test: blockdev nvme passthru vendor specific ...[2024-11-25 14:13:28.737618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.762 [2024-11-25 14:13:28.737635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.737855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.762 [2024-11-25 14:13:28.737868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.738091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.762 [2024-11-25 14:13:28.738103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:23.762 [2024-11-25 14:13:28.738331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.762 [2024-11-25 14:13:28.738345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:23.762 passed 00:15:23.762 Test: blockdev nvme admin passthru ...passed 00:15:23.762 Test: blockdev copy ...passed 00:15:23.762 00:15:23.762 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.762 suites 1 1 n/a 0 0 00:15:23.762 tests 23 23 23 0 0 00:15:23.762 asserts 152 152 152 0 n/a 00:15:23.762 00:15:23.762 Elapsed time = 1.104 seconds 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.024 rmmod nvme_tcp 00:15:24.024 rmmod nvme_fabrics 00:15:24.024 rmmod nvme_keyring 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
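The cleanup traced through here reduces to a short sequence: drop the subsystem over RPC, sync, then unload the kernel NVMe fabrics modules. A hedged sketch using the public scripts/rpc.py helper in place of the suite's rpc_cmd wrapper (the rpc.py path assumes a stock SPDK checkout, and $nvmf_pid stands in for the target pid, 3301857 in this run):

# equivalent of the nvmf_delete_subsystem RPC above
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
# modprobe -r nvme-tcp cascaded through nvme_fabrics and nvme_keyring in
# the log; listing them explicitly is equivalent when nothing else holds
# a reference
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
    modprobe -v -r "$mod" || true
done
kill "$nvmf_pid" 2>/dev/null || true   # $nvmf_pid is hypothetical here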
00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3301857 ']' 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3301857 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3301857 ']' 00:15:24.024 14:13:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3301857 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3301857 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3301857' 00:15:24.024 killing process with pid 3301857 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3301857 00:15:24.024 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3301857 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.288 14:13:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.202 14:13:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:26.202 00:15:26.202 real 0m12.183s 00:15:26.202 user 0m13.414s 00:15:26.202 sys 0m6.198s 00:15:26.202 14:13:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.202 14:13:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:26.202 ************************************ 00:15:26.202 END TEST nvmf_bdevio 00:15:26.202 ************************************ 00:15:26.463 14:13:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:26.463 00:15:26.463 real 5m4.215s 00:15:26.463 user 11m36.020s 00:15:26.463 sys 1m50.634s 
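Each suite in this log is bracketed by START/END banners and a real/user/sys timing block like the one just printed; those come from the run_test wrapper. A rough stand-in that reproduces the observable shape (illustrative only, not autotest_common.sh's actual implementation):

run_test_sketch() {
    local name=$1; shift
    printf '************************************\n'
    printf 'START TEST %s\n' "$name"
    printf '************************************\n'
    time "$@"                 # bash's time keyword emits the real/user/sys block
    local rc=$?
    printf '************************************\n'
    printf 'END TEST %s\n' "$name"
    printf '************************************\n'
    return $rc
}
# shape of the next invocation below:
# run_test_sketch nvmf_target_extra ./test/nvmf/nvmf_target_extra.sh --transport=tcp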
00:15:26.463 14:13:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:26.463 14:13:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:26.463 ************************************ 00:15:26.463 END TEST nvmf_target_core 00:15:26.463 ************************************ 00:15:26.463 14:13:31 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:26.463 14:13:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.463 14:13:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.463 14:13:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:26.463 ************************************ 00:15:26.463 START TEST nvmf_target_extra 00:15:26.463 ************************************ 00:15:26.463 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:26.464 * Looking for test storage... 00:15:26.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:26.464 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:26.464 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:15:26.464 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.726 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.726 --rc genhtml_branch_coverage=1 00:15:26.726 --rc genhtml_function_coverage=1 00:15:26.726 --rc genhtml_legend=1 00:15:26.727 --rc geninfo_all_blocks=1 00:15:26.727 --rc geninfo_unexecuted_blocks=1 00:15:26.727 00:15:26.727 ' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:26.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.727 --rc genhtml_branch_coverage=1 00:15:26.727 --rc genhtml_function_coverage=1 00:15:26.727 --rc genhtml_legend=1 00:15:26.727 --rc geninfo_all_blocks=1 00:15:26.727 --rc geninfo_unexecuted_blocks=1 00:15:26.727 00:15:26.727 ' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:26.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.727 --rc genhtml_branch_coverage=1 00:15:26.727 --rc genhtml_function_coverage=1 00:15:26.727 --rc genhtml_legend=1 00:15:26.727 --rc geninfo_all_blocks=1 00:15:26.727 --rc geninfo_unexecuted_blocks=1 00:15:26.727 00:15:26.727 ' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:26.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.727 --rc genhtml_branch_coverage=1 00:15:26.727 --rc genhtml_function_coverage=1 00:15:26.727 --rc genhtml_legend=1 00:15:26.727 --rc geninfo_all_blocks=1 00:15:26.727 --rc geninfo_unexecuted_blocks=1 00:15:26.727 00:15:26.727 ' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
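The xtrace above stepped through scripts/common.sh comparing lcov's version against 2 one numeric field at a time: split both versions on IFS=.-:, walk the longer of the two component lists, and decide at the first unequal pair. Condensed into a self-contained sketch (a re-implementation for illustration, not the exact source):

version_lt() {
    # split on . - : as the traced IFS=.-: does; missing fields count as 0
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo 'lcov predates 2.x, enable the -rc compat options'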
00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.727 ************************************ 00:15:26.727 START TEST nvmf_example 00:15:26.727 ************************************ 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:26.727 * Looking for test storage... 
00:15:26.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:15:26.727 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.989 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:26.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.990 --rc genhtml_branch_coverage=1 00:15:26.990 --rc genhtml_function_coverage=1 00:15:26.990 --rc genhtml_legend=1 00:15:26.990 --rc geninfo_all_blocks=1 00:15:26.990 --rc geninfo_unexecuted_blocks=1 00:15:26.990 00:15:26.990 ' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:26.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.990 --rc genhtml_branch_coverage=1 00:15:26.990 --rc genhtml_function_coverage=1 00:15:26.990 --rc genhtml_legend=1 00:15:26.990 --rc geninfo_all_blocks=1 00:15:26.990 --rc geninfo_unexecuted_blocks=1 00:15:26.990 00:15:26.990 ' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:26.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.990 --rc genhtml_branch_coverage=1 00:15:26.990 --rc genhtml_function_coverage=1 00:15:26.990 --rc genhtml_legend=1 00:15:26.990 --rc geninfo_all_blocks=1 00:15:26.990 --rc geninfo_unexecuted_blocks=1 00:15:26.990 00:15:26.990 ' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:26.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.990 --rc genhtml_branch_coverage=1 00:15:26.990 --rc genhtml_function_coverage=1 00:15:26.990 --rc genhtml_legend=1 00:15:26.990 --rc geninfo_all_blocks=1 00:15:26.990 --rc geninfo_unexecuted_blocks=1 00:15:26.990 00:15:26.990 ' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:26.990 14:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.990 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:26.990 14:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:15:26.990 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:15:35.135 14:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:35.135 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:35.136 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:35.136 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:35.136 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:35.136 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.136 14:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:35.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:35.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:15:35.136 00:15:35.136 --- 10.0.0.2 ping statistics --- 00:15:35.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.136 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:15:35.136 00:15:35.136 --- 10.0.0.1 ping statistics --- 00:15:35.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.136 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3306632 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3306632 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3306632 ']' 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.136 14:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.136 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.137 14:13:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.397 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:35.659 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:47.893 Initializing NVMe Controllers 00:15:47.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:47.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:47.894 Initialization complete. Launching workers. 00:15:47.894 ======================================================== 00:15:47.894 Latency(us) 00:15:47.894 Device Information : IOPS MiB/s Average min max 00:15:47.894 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19221.20 75.08 3331.40 620.26 16126.50 00:15:47.894 ======================================================== 00:15:47.894 Total : 19221.20 75.08 3331.40 620.26 16126.50 00:15:47.894 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.894 rmmod nvme_tcp 00:15:47.894 rmmod nvme_fabrics 00:15:47.894 rmmod nvme_keyring 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3306632 ']' 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3306632 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3306632 ']' 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3306632 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3306632 00:15:47.894 14:13:50 
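Everything between nvmf_create_transport and spdk_nvme_perf above is plain JSON-RPC against the target's /var/tmp/spdk.sock: create the TCP transport, back a subsystem with a 64 MiB malloc bdev, and expose it on 10.0.0.2:4420. The rpc_cmd wrapper resolves to scripts/rpc.py; written out directly (paths assume an SPDK checkout, and the netns wrapping of the RPC calls is elided for brevity):

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport with the traced tuning flags
$RPC bdev_malloc_create 64 512                    # 64 MiB RAM bdev, 512 B blocks -> "Malloc0"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001                        # -a: allow any host; -s: serial number
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420

# Exercise it from the initiator: 64 outstanding I/Os, 4 KiB each,
# random mixed workload at 30% reads (-M 30), for 10 seconds.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The perf summary is self-consistent under Little's law: 64 outstanding I/Os at 19221.20 IOPS imply 64 / 19221.20 s ≈ 3.33 ms per I/O, matching the reported 3331.40 µs average latency, and 19221.20 IOPS × 4 KiB reproduces the 75.08 MiB/s column.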
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3306632' 00:15:47.894 killing process with pid 3306632 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3306632 00:15:47.894 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3306632 00:15:47.894 nvmf threads initialize successfully 00:15:47.894 bdev subsystem init successfully 00:15:47.894 created a nvmf target service 00:15:47.894 create targets's poll groups done 00:15:47.894 all subsystems of target started 00:15:47.894 nvmf target is running 00:15:47.894 all subsystems of target stopped 00:15:47.894 destroy targets's poll groups done 00:15:47.894 destroyed the nvmf target service 00:15:47.894 bdev subsystem finish successfully 00:15:47.894 nvmf threads destroy successfully 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.894 14:13:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.154 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:48.154 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:48.154 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:48.154 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:48.154 00:15:48.154 real 0m21.549s 00:15:48.154 user 0m46.844s 00:15:48.154 sys 0m7.168s 00:15:48.154 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.154 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:48.154 ************************************ 00:15:48.154 END TEST nvmf_example 00:15:48.154 ************************************ 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
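The teardown above undoes setup in reverse: the trap kills the target (whose buffered shutdown messages flush here), modprobe -v -r nvme-tcp pulls the initiator modules and their nvme_fabrics/nvme_keyring dependencies back out, and iptr removes every firewall rule the setup tagged by round-tripping the ruleset through a grep filter. Condensed into the same shape:

# Mirror of nvmftestfini / nvmf_tcp_fini from the trace.
kill "$nvmfpid" 2>/dev/null && wait "$nvmfpid" 2>/dev/null
sync
modprobe -v -r nvme-tcp   # -r also unloads dependent nvme_fabrics / nvme_keyring

# Drop every rule carrying the SPDK_NVMF comment without tracking rule
# numbers: dump the ruleset, filter, load the remainder back.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Deleting the namespace hands a physical NIC back to the root namespace
# (a veth end from the earlier sketch is destroyed with it instead).
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1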
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:48.415 ************************************ 00:15:48.415 START TEST nvmf_filesystem 00:15:48.415 ************************************ 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:48.415 * Looking for test storage... 00:15:48.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:48.415 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:48.416 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.416 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:48.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.680 --rc genhtml_branch_coverage=1 00:15:48.680 --rc genhtml_function_coverage=1 00:15:48.680 --rc genhtml_legend=1 00:15:48.680 --rc geninfo_all_blocks=1 00:15:48.680 --rc geninfo_unexecuted_blocks=1 00:15:48.680 00:15:48.680 ' 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:48.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.680 --rc genhtml_branch_coverage=1 00:15:48.680 --rc genhtml_function_coverage=1 00:15:48.680 --rc genhtml_legend=1 00:15:48.680 --rc geninfo_all_blocks=1 00:15:48.680 --rc geninfo_unexecuted_blocks=1 00:15:48.680 00:15:48.680 ' 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:48.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.680 --rc genhtml_branch_coverage=1 00:15:48.680 --rc genhtml_function_coverage=1 00:15:48.680 --rc genhtml_legend=1 00:15:48.680 --rc geninfo_all_blocks=1 00:15:48.680 --rc geninfo_unexecuted_blocks=1 00:15:48.680 00:15:48.680 ' 00:15:48.680 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:48.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.681 --rc genhtml_branch_coverage=1 00:15:48.681 --rc genhtml_function_coverage=1 00:15:48.681 --rc genhtml_legend=1 00:15:48.681 --rc geninfo_all_blocks=1 00:15:48.681 --rc geninfo_unexecuted_blocks=1 00:15:48.681 00:15:48.681 ' 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:48.681 14:13:53 
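The lt/cmp_versions dance traced above (scripts/common.sh@333-368) decides whether the installed lcov predates version 2, in order to pick compatible --rc option spellings. The helper splits both version strings on '.', '-' and ':' and compares component by component. A trimmed restatement of that logic — simplified to numeric components and the '<' case only, so the real helper's decimal normalizer and other operators are omitted:

# Component-wise "is $1 < $2" for dotted version strings, after the shape
# of cmp_versions in scripts/common.sh (numeric components assumed).
version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for ((i = 0; i < n; i++)); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # the shorter version pads with zeros
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal, hence not less-than
}

# As in the trace: lcov 1.15 predates 2, so the legacy spellings apply.
version_lt 1.15 2 && echo "use --rc lcov_branch_coverage=1 spellings"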
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:48.681 
14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:48.681 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:48.682 #define SPDK_CONFIG_H 00:15:48.682 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:48.682 #define SPDK_CONFIG_APPS 1 00:15:48.682 #define SPDK_CONFIG_ARCH native 00:15:48.682 #undef SPDK_CONFIG_ASAN 00:15:48.682 #undef SPDK_CONFIG_AVAHI 00:15:48.682 #undef SPDK_CONFIG_CET 00:15:48.682 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:48.682 #define SPDK_CONFIG_COVERAGE 1 00:15:48.682 #define SPDK_CONFIG_CROSS_PREFIX 00:15:48.682 #undef SPDK_CONFIG_CRYPTO 00:15:48.682 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:48.682 #undef SPDK_CONFIG_CUSTOMOCF 00:15:48.682 #undef SPDK_CONFIG_DAOS 00:15:48.682 #define SPDK_CONFIG_DAOS_DIR 00:15:48.682 #define SPDK_CONFIG_DEBUG 1 00:15:48.682 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:48.682 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:48.682 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:48.682 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:48.682 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:48.682 #undef SPDK_CONFIG_DPDK_UADK 00:15:48.682 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:48.682 #define SPDK_CONFIG_EXAMPLES 1 00:15:48.682 #undef SPDK_CONFIG_FC 00:15:48.682 #define SPDK_CONFIG_FC_PATH 00:15:48.682 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:48.682 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:48.682 #define SPDK_CONFIG_FSDEV 1 00:15:48.682 #undef SPDK_CONFIG_FUSE 00:15:48.682 #undef SPDK_CONFIG_FUZZER 00:15:48.682 #define SPDK_CONFIG_FUZZER_LIB 00:15:48.682 #undef SPDK_CONFIG_GOLANG 00:15:48.682 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:48.682 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:48.682 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:48.682 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:48.682 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:48.682 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:48.682 #undef SPDK_CONFIG_HAVE_LZ4 00:15:48.682 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:48.682 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:48.682 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:48.682 #define SPDK_CONFIG_IDXD 1 00:15:48.682 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:48.682 #undef SPDK_CONFIG_IPSEC_MB 00:15:48.682 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:48.682 #define SPDK_CONFIG_ISAL 1 00:15:48.682 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:48.682 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:48.682 #define SPDK_CONFIG_LIBDIR 00:15:48.682 #undef SPDK_CONFIG_LTO 00:15:48.682 #define SPDK_CONFIG_MAX_LCORES 128 00:15:48.682 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:48.682 #define SPDK_CONFIG_NVME_CUSE 1 00:15:48.682 #undef SPDK_CONFIG_OCF 00:15:48.682 #define SPDK_CONFIG_OCF_PATH 00:15:48.682 #define SPDK_CONFIG_OPENSSL_PATH 00:15:48.682 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:48.682 #define SPDK_CONFIG_PGO_DIR 00:15:48.682 #undef SPDK_CONFIG_PGO_USE 00:15:48.682 #define SPDK_CONFIG_PREFIX /usr/local 00:15:48.682 #undef SPDK_CONFIG_RAID5F 00:15:48.682 #undef SPDK_CONFIG_RBD 00:15:48.682 #define SPDK_CONFIG_RDMA 1 00:15:48.682 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:48.682 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:48.682 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:48.682 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:48.682 #define SPDK_CONFIG_SHARED 1 00:15:48.682 #undef SPDK_CONFIG_SMA 00:15:48.682 #define SPDK_CONFIG_TESTS 1 00:15:48.682 #undef SPDK_CONFIG_TSAN 
00:15:48.682 #define SPDK_CONFIG_UBLK 1 00:15:48.682 #define SPDK_CONFIG_UBSAN 1 00:15:48.682 #undef SPDK_CONFIG_UNIT_TESTS 00:15:48.682 #undef SPDK_CONFIG_URING 00:15:48.682 #define SPDK_CONFIG_URING_PATH 00:15:48.682 #undef SPDK_CONFIG_URING_ZNS 00:15:48.682 #undef SPDK_CONFIG_USDT 00:15:48.682 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:48.682 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:48.682 #define SPDK_CONFIG_VFIO_USER 1 00:15:48.682 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:48.682 #define SPDK_CONFIG_VHOST 1 00:15:48.682 #define SPDK_CONFIG_VIRTIO 1 00:15:48.682 #undef SPDK_CONFIG_VTUNE 00:15:48.682 #define SPDK_CONFIG_VTUNE_DIR 00:15:48.682 #define SPDK_CONFIG_WERROR 1 00:15:48.682 #define SPDK_CONFIG_WPDK_DIR 00:15:48.682 #undef SPDK_CONFIG_XNVME 00:15:48.682 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:48.682 14:13:53 
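The PATH echoed above carries six copies of each golangci/protoc/go directory because /etc/opt/spdk-pkgdep/paths/export.sh prepends unconditionally and every nested run_test sources it again. That is harmless, since lookup stops at the first hit, but a duplicate-safe prepend is a one-liner; the guard below is a sketch of what such an export.sh could do, not what the shipped one does:

# Prepend a directory to PATH only if it is not already present.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;               # already on PATH: leave it alone
    *) PATH="$1:$PATH" ;;
  esac
}

path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH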
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:48.682 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:48.683 14:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:48.683 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
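The LD_LIBRARY_PATH and PYTHONPATH values above carry the same directory triple several times over, which is what unconditional appending produces when the environment setup is sourced repeatedly. A hypothetical duplicate-free prepend helper (not part of autotest_common.sh, shown only to illustrate the alternative) would look like:

  # prepend a directory to a path-style variable only if it is not already present
  path_prepend() {
    local var=$1 dir=$2
    case ":${!var}:" in
      *":${dir}:"*) ;;                                            # already present, leave it alone
      *) printf -v "$var" '%s' "${dir}${!var:+:${!var}}" ;;
    esac
  }
  path_prepend LD_LIBRARY_PATH /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
  export LD_LIBRARY_PATH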
00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
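The suppression-file steps traced above (rm, cat, the `echo leak:libfuse3.so`, and the LSAN_OPTIONS export) reduce to the following sketch; the redirection of the echo into the file is inferred from the surrounding trace rather than shown verbatim in the log:

  asan_suppression_file=/var/tmp/asan_suppression_file            # path taken from the trace
  rm -rf "$asan_suppression_file"
  echo "leak:libfuse3.so" >> "$asan_suppression_file"             # silence known libfuse3 leak reports
  export LSAN_OPTIONS=suppressions=$asan_suppression_file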
00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:15:48.684 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3309411 ]] 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3309411 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.tseXNA 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.tseXNA/tests/target /tmp/spdk.tseXNA 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:15:48.685 14:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=118284013568 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356509184 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11072495616 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666886144 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678252544 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847934976 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871302656 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23367680 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.685 14:13:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677371904 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678256640 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=884736 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935634944 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935647232 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:48.685 * Looking for test storage... 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=118284013568 00:15:48.685 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13287088128 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
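The df loop above fills the mounts/fss/sizes/avails/uses arrays keyed by mount point, then checks the mount backing the test directory against requested_size (2214592512 bytes here, i.e. 2 GiB + 64 MiB). Condensed into a stand-alone sketch, with the target directory and size copied from the trace:

  target_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  requested_size=2214592512
  declare -A avails
  while read -r _ _ _ avail _ mount; do
    avails[$mount]=$((avail * 1024))                              # df -P reports 1K blocks
  done < <(df -P | tail -n +2)
  mount=$(df -P "$target_dir" | awk 'NR == 2 {print $6}')         # mount point backing the dir
  (( avails[$mount] >= requested_size )) && printf '* Found test storage at %s\n' "$target_dir"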
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.686 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:48.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.948 --rc genhtml_branch_coverage=1 00:15:48.948 --rc genhtml_function_coverage=1 00:15:48.948 --rc genhtml_legend=1 00:15:48.948 --rc geninfo_all_blocks=1 00:15:48.948 --rc geninfo_unexecuted_blocks=1 00:15:48.948 00:15:48.948 ' 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:48.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.948 --rc genhtml_branch_coverage=1 00:15:48.948 --rc genhtml_function_coverage=1 00:15:48.948 --rc genhtml_legend=1 00:15:48.948 --rc geninfo_all_blocks=1 00:15:48.948 --rc geninfo_unexecuted_blocks=1 00:15:48.948 00:15:48.948 ' 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:48.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.948 --rc genhtml_branch_coverage=1 00:15:48.948 --rc genhtml_function_coverage=1 00:15:48.948 --rc genhtml_legend=1 00:15:48.948 --rc geninfo_all_blocks=1 00:15:48.948 --rc geninfo_unexecuted_blocks=1 00:15:48.948 00:15:48.948 ' 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:48.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.948 --rc genhtml_branch_coverage=1 00:15:48.948 --rc genhtml_function_coverage=1 00:15:48.948 --rc genhtml_legend=1 00:15:48.948 --rc geninfo_all_blocks=1 00:15:48.948 --rc geninfo_unexecuted_blocks=1 00:15:48.948 00:15:48.948 ' 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
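The lt/cmp_versions walk from scripts/common.sh above compares 1.15 against 2 component by component after splitting on IFS=.-:. The same core logic as a self-contained function (the validation done by the real decimal helper is omitted):

  version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do                             # missing components count as 0
      (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
      (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1                                                      # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2'                # mirrors the `lt 1.15 2` call above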
-- nvmf/common.sh@7 -- # uname -s 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.948 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:48.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:48.949 14:13:53 
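The `[: : integer expression expected` message above is test(1) at nvmf/common.sh line 33 receiving an empty string where -eq needs an integer, as the preceding trace `'[' '' -eq 1 ']'` shows. A defensive form of that test (illustrative only; the variable name below is a stand-in, not the one used at line 33):

  SOME_FLAG=                                                      # hypothetical empty flag, as at line 33
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then                            # :-0 keeps test(1) happy when unset or empty
    echo 'flag enabled'
  fi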
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:15:48.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:57.092 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:57.093 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:57.093 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.093 14:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:57.093 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:57.093 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:57.093 14:14:00 
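For each PCI function that matched the e810 device table above, the script globs the sysfs net directory and strips the path prefix to recover the kernel interface names. Reduced to the two functions actually found in this run:

  for pci in 0000:4b:00.0 0000:4b:00.1; do                        # the two ice/e810 ports from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")                       # /sys/.../net/cvl_0_0 -> cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done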
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.093 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:57.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:15:57.093 00:15:57.093 --- 10.0.0.2 ping statistics --- 00:15:57.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.093 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:15:57.093 00:15:57.093 --- 10.0.0.1 ping statistics --- 00:15:57.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.093 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:57.093 ************************************ 00:15:57.093 START TEST nvmf_filesystem_no_in_capsule 00:15:57.093 ************************************ 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3313366 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3313366 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3313366 ']' 00:15:57.093 
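nvmf_tcp_init above splits the two ports across a network namespace: the target interface moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator side keeps 10.0.0.1 on the host, an iptables ACCEPT rule opens port 4420, and both directions are ping-verified before nvmf_tgt is started inside the namespace. The same wiring collapsed into plain commands, with interfaces and addresses taken from the log (the addr flushes and the iptables comment option are left out):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow the NVMe/TCP port in
  ping -c 1 10.0.0.2                                              # target reachable from the host
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # and back again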
14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.093 14:14:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.093 [2024-11-25 14:14:01.432848] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:15:57.094 [2024-11-25 14:14:01.432910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.094 [2024-11-25 14:14:01.533824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.094 [2024-11-25 14:14:01.586458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.094 [2024-11-25 14:14:01.586510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.094 [2024-11-25 14:14:01.586519] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.094 [2024-11-25 14:14:01.586526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.094 [2024-11-25 14:14:01.586533] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:57.094 [2024-11-25 14:14:01.588909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.094 [2024-11-25 14:14:01.589074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.094 [2024-11-25 14:14:01.589237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.094 [2024-11-25 14:14:01.589238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.355 [2024-11-25 14:14:02.309284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.355 Malloc1 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.355 14:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.355 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.617 [2024-11-25 14:14:02.459980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:57.617 { 00:15:57.617 "name": "Malloc1", 00:15:57.617 "aliases": [ 00:15:57.617 "e546c928-3ca4-4b12-99b4-bd7935f006ab" 00:15:57.617 ], 00:15:57.617 "product_name": "Malloc disk", 00:15:57.617 "block_size": 512, 00:15:57.617 "num_blocks": 1048576, 00:15:57.617 "uuid": "e546c928-3ca4-4b12-99b4-bd7935f006ab", 00:15:57.617 "assigned_rate_limits": { 00:15:57.617 "rw_ios_per_sec": 0, 00:15:57.617 "rw_mbytes_per_sec": 0, 00:15:57.617 "r_mbytes_per_sec": 0, 00:15:57.617 "w_mbytes_per_sec": 0 00:15:57.617 }, 00:15:57.617 "claimed": true, 00:15:57.617 "claim_type": "exclusive_write", 00:15:57.617 "zoned": false, 00:15:57.617 "supported_io_types": { 00:15:57.617 "read": 
true, 00:15:57.617 "write": true, 00:15:57.617 "unmap": true, 00:15:57.617 "flush": true, 00:15:57.617 "reset": true, 00:15:57.617 "nvme_admin": false, 00:15:57.617 "nvme_io": false, 00:15:57.617 "nvme_io_md": false, 00:15:57.617 "write_zeroes": true, 00:15:57.617 "zcopy": true, 00:15:57.617 "get_zone_info": false, 00:15:57.617 "zone_management": false, 00:15:57.617 "zone_append": false, 00:15:57.617 "compare": false, 00:15:57.617 "compare_and_write": false, 00:15:57.617 "abort": true, 00:15:57.617 "seek_hole": false, 00:15:57.617 "seek_data": false, 00:15:57.617 "copy": true, 00:15:57.617 "nvme_iov_md": false 00:15:57.617 }, 00:15:57.617 "memory_domains": [ 00:15:57.617 { 00:15:57.617 "dma_device_id": "system", 00:15:57.617 "dma_device_type": 1 00:15:57.617 }, 00:15:57.617 { 00:15:57.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.617 "dma_device_type": 2 00:15:57.617 } 00:15:57.617 ], 00:15:57.617 "driver_specific": {} 00:15:57.617 } 00:15:57.617 ]' 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:57.617 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.532 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:59.532 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:59.532 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.532 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:59.532 14:14:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:01.444 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:01.444 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:01.444 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:01.444 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:01.444 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.444 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:01.444 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:01.445 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:02.015 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:02.958 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:02.958 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:02.958 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:02.958 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.958 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:02.958 ************************************ 00:16:02.958 START TEST filesystem_ext4 00:16:02.958 ************************************ 00:16:02.958 14:14:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
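Before the per-filesystem subtests run, the host-side preparation traced above reduces to a short sequence; every command below appears verbatim in this trace, only the shell plumbing between them is condensed:

    # Attach the kernel NVMe/TCP initiator to the SPDK subsystem
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # Resolve the namespace's block device by its serial number
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    # One GPT partition spanning the whole 512 MiB namespace, plus the mount point
    mkdir -p /mnt/device
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

Each subtest (ext4, btrfs, xfs) then runs mkfs on /dev/${nvme_name}p1, mounts it at /mnt/device, creates and removes a file with a sync on either side, and unmounts.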
00:16:02.958 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:02.958 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:02.958 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:02.958 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:02.958 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:02.958 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:02.958 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:02.959 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:02.959 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:02.959 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:02.959 mke2fs 1.47.0 (5-Feb-2023) 00:16:03.219 Discarding device blocks: 0/522240 done 00:16:03.219 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:03.219 Filesystem UUID: adcbc18b-1915-48f8-9570-a4787d733b7d 00:16:03.219 Superblock backups stored on blocks: 00:16:03.219 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:03.219 00:16:03.219 Allocating group tables: 0/64 done 00:16:03.219 Writing inode tables: 0/64 done 00:16:05.764 Creating journal (8192 blocks): done 00:16:08.127 Writing superblocks and filesystem accounting information: 0/64 done 00:16:08.127 00:16:08.127 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:08.127 14:14:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:13.475 
14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3313366 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:13.475 00:16:13.475 real 0m10.519s 00:16:13.475 user 0m0.029s 00:16:13.475 sys 0m0.081s 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:13.475 ************************************ 00:16:13.475 END TEST filesystem_ext4 00:16:13.475 ************************************ 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.475 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:13.735 ************************************ 00:16:13.735 START TEST filesystem_btrfs 00:16:13.735 ************************************ 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:13.735 14:14:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:13.735 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:13.995 btrfs-progs v6.8.1 00:16:13.995 See https://btrfs.readthedocs.io for more information. 00:16:13.995 00:16:13.995 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:13.995 NOTE: several default settings have changed in version 5.15, please make sure 00:16:13.995 this does not affect your deployments: 00:16:13.995 - DUP for metadata (-m dup) 00:16:13.995 - enabled no-holes (-O no-holes) 00:16:13.995 - enabled free-space-tree (-R free-space-tree) 00:16:13.995 00:16:13.995 Label: (null) 00:16:13.995 UUID: d803f14c-7c18-4511-b7a2-0a7ea97d164a 00:16:13.995 Node size: 16384 00:16:13.995 Sector size: 4096 (CPU page size: 4096) 00:16:13.995 Filesystem size: 510.00MiB 00:16:13.995 Block group profiles: 00:16:13.995 Data: single 8.00MiB 00:16:13.995 Metadata: DUP 32.00MiB 00:16:13.995 System: DUP 8.00MiB 00:16:13.995 SSD detected: yes 00:16:13.995 Zoned device: no 00:16:13.995 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:13.995 Checksum: crc32c 00:16:13.995 Number of devices: 1 00:16:13.995 Devices: 00:16:13.995 ID SIZE PATH 00:16:13.995 1 510.00MiB /dev/nvme0n1p1 00:16:13.995 00:16:13.995 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:13.995 14:14:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:14.566 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:14.566 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:14.566 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:14.566 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:14.566 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:14.566 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3313366 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:14.827 
14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:14.827 00:16:14.827 real 0m1.099s 00:16:14.827 user 0m0.020s 00:16:14.827 sys 0m0.128s 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:14.827 ************************************ 00:16:14.827 END TEST filesystem_btrfs 00:16:14.827 ************************************ 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.827 ************************************ 00:16:14.827 START TEST filesystem_xfs 00:16:14.827 ************************************ 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:14.827 14:14:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:14.827 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:14.827 = sectsz=512 attr=2, projid32bit=1 00:16:14.827 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:14.827 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:14.827 data 
= bsize=4096 blocks=130560, imaxpct=25 00:16:14.827 = sunit=0 swidth=0 blks 00:16:14.827 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:14.827 log =internal log bsize=4096 blocks=16384, version=2 00:16:14.828 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:14.828 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:15.769 Discarding blocks...Done. 00:16:15.769 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:15.769 14:14:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3313366 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:17.680 00:16:17.680 real 0m2.671s 00:16:17.680 user 0m0.026s 00:16:17.680 sys 0m0.081s 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:17.680 ************************************ 00:16:17.680 END TEST filesystem_xfs 00:16:17.680 ************************************ 00:16:17.680 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:17.941 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:17.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.942 14:14:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3313366 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3313366 ']' 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3313366 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.942 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3313366 00:16:17.942 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.942 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.942 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3313366' 00:16:17.942 killing process with pid 3313366 00:16:17.942 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3313366 00:16:17.942 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3313366 00:16:18.202 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:18.202 00:16:18.202 real 0m21.830s 00:16:18.202 user 1m26.272s 00:16:18.202 sys 0m1.546s 00:16:18.202 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.202 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.202 ************************************ 00:16:18.202 END TEST nvmf_filesystem_no_in_capsule 00:16:18.202 ************************************ 00:16:18.202 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:18.202 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.202 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.202 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:18.202 ************************************ 00:16:18.202 START TEST nvmf_filesystem_in_capsule 00:16:18.203 ************************************ 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3317654 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3317654 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3317654 ']' 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
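The in-capsule run that begins here is identical to the no_in_capsule run above except for the transport's in-capsule data size. Reconstructed from the rpc_cmd calls traced below, and written as direct scripts/rpc.py invocations for readability (the test actually issues them through its rpc_cmd wrapper):

    # TCP transport with 4 KiB of in-capsule data (the previous run used -c 0)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096
    # 512 MiB RAM-backed bdev: 1048576 blocks of 512 bytes
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # Subsystem open to any host (-a), with the serial the host greps for later
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420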
00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.203 14:14:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:18.464 [2024-11-25 14:14:23.344567] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:16:18.464 [2024-11-25 14:14:23.344618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.464 [2024-11-25 14:14:23.435237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.464 [2024-11-25 14:14:23.469055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.464 [2024-11-25 14:14:23.469086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.464 [2024-11-25 14:14:23.469091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.464 [2024-11-25 14:14:23.469097] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.464 [2024-11-25 14:14:23.469101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.464 [2024-11-25 14:14:23.470359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.464 [2024-11-25 14:14:23.470615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.464 [2024-11-25 14:14:23.470767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.464 [2024-11-25 14:14:23.470768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.404 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.404 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 [2024-11-25 14:14:24.193364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.405 14:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 Malloc1 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 [2024-11-25 14:14:24.317606] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:19.405 14:14:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:19.405 { 00:16:19.405 "name": "Malloc1", 00:16:19.405 "aliases": [ 00:16:19.405 "83f2b848-2764-4e65-bb1b-09e1974fe94f" 00:16:19.405 ], 00:16:19.405 "product_name": "Malloc disk", 00:16:19.405 "block_size": 512, 00:16:19.405 "num_blocks": 1048576, 00:16:19.405 "uuid": "83f2b848-2764-4e65-bb1b-09e1974fe94f", 00:16:19.405 "assigned_rate_limits": { 00:16:19.405 "rw_ios_per_sec": 0, 00:16:19.405 "rw_mbytes_per_sec": 0, 00:16:19.405 "r_mbytes_per_sec": 0, 00:16:19.405 "w_mbytes_per_sec": 0 00:16:19.405 }, 00:16:19.405 "claimed": true, 00:16:19.405 "claim_type": "exclusive_write", 00:16:19.405 "zoned": false, 00:16:19.405 "supported_io_types": { 00:16:19.405 "read": true, 00:16:19.405 "write": true, 00:16:19.405 "unmap": true, 00:16:19.405 "flush": true, 00:16:19.405 "reset": true, 00:16:19.405 "nvme_admin": false, 00:16:19.405 "nvme_io": false, 00:16:19.405 "nvme_io_md": false, 00:16:19.405 "write_zeroes": true, 00:16:19.405 "zcopy": true, 00:16:19.405 "get_zone_info": false, 00:16:19.405 "zone_management": false, 00:16:19.405 "zone_append": false, 00:16:19.405 "compare": false, 00:16:19.405 "compare_and_write": false, 00:16:19.405 "abort": true, 00:16:19.405 "seek_hole": false, 00:16:19.405 "seek_data": false, 00:16:19.405 "copy": true, 00:16:19.405 "nvme_iov_md": false 00:16:19.405 }, 00:16:19.405 "memory_domains": [ 00:16:19.405 { 00:16:19.405 "dma_device_id": "system", 00:16:19.405 "dma_device_type": 1 00:16:19.405 }, 00:16:19.405 { 00:16:19.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.405 "dma_device_type": 2 00:16:19.405 } 00:16:19.405 ], 00:16:19.405 "driver_specific": {} 00:16:19.405 } 00:16:19.405 ]' 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:19.405 14:14:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:21.316 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:21.316 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:21.316 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.316 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:21.316 14:14:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:23.230 14:14:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:23.230 14:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:24.172 14:14:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:25.113 ************************************ 00:16:25.113 START TEST filesystem_in_capsule_ext4 00:16:25.113 ************************************ 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:25.113 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:25.113 mke2fs 1.47.0 (5-Feb-2023) 00:16:25.113 Discarding device blocks: 0/522240 done 00:16:25.113 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:25.113 Filesystem UUID: 7272561a-be0e-4c24-83b3-c305e1b6511d 00:16:25.113 Superblock backups stored on blocks: 00:16:25.113 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:25.113 00:16:25.113 Allocating group tables: 0/64 done 00:16:25.113 Writing inode tables: 
0/64 done 00:16:25.113 Creating journal (8192 blocks): done 00:16:25.373 Writing superblocks and filesystem accounting information: 0/64 done 00:16:25.373 00:16:25.373 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:25.373 14:14:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3317654 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:30.659 00:16:30.659 real 0m5.657s 00:16:30.659 user 0m0.033s 00:16:30.659 sys 0m0.070s 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.659 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:30.659 ************************************ 00:16:30.659 END TEST filesystem_in_capsule_ext4 00:16:30.659 ************************************ 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:30.920 
************************************ 00:16:30.920 START TEST filesystem_in_capsule_btrfs 00:16:30.920 ************************************ 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:30.920 14:14:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:31.182 btrfs-progs v6.8.1 00:16:31.182 See https://btrfs.readthedocs.io for more information. 00:16:31.182 00:16:31.182 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:16:31.182 NOTE: several default settings have changed in version 5.15, please make sure
00:16:31.182 this does not affect your deployments:
00:16:31.182 - DUP for metadata (-m dup)
00:16:31.182 - enabled no-holes (-O no-holes)
00:16:31.182 - enabled free-space-tree (-R free-space-tree)
00:16:31.182 
00:16:31.182 Label: (null)
00:16:31.182 UUID: f1f83037-f7d6-4331-9f12-bb827dfad220
00:16:31.182 Node size: 16384
00:16:31.182 Sector size: 4096 (CPU page size: 4096)
00:16:31.182 Filesystem size: 510.00MiB
00:16:31.182 Block group profiles:
00:16:31.182 Data: single 8.00MiB
00:16:31.182 Metadata: DUP 32.00MiB
00:16:31.182 System: DUP 8.00MiB
00:16:31.182 SSD detected: yes
00:16:31.182 Zoned device: no
00:16:31.182 Features: extref, skinny-metadata, no-holes, free-space-tree
00:16:31.182 Checksum: crc32c
00:16:31.182 Number of devices: 1
00:16:31.182 Devices:
00:16:31.182 ID SIZE PATH
00:16:31.182 1 510.00MiB /dev/nvme0n1p1
00:16:31.182 
00:16:31.182 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:16:31.182 14:14:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3317654
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:16:32.123 
00:16:32.123 real 0m1.357s
00:16:32.123 user 0m0.017s
00:16:32.123 sys 0m0.133s
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:16:32.123 ************************************
00:16:32.123 END TEST filesystem_in_capsule_btrfs
00:16:32.123 ************************************
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:32.123 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:16:32.384 ************************************
00:16:32.384 START TEST filesystem_in_capsule_xfs
00:16:32.384 ************************************
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:16:32.384 14:14:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:16:32.384 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:16:32.384 = sectsz=512 attr=2, projid32bit=1
00:16:32.384 = crc=1 finobt=1, sparse=1, rmapbt=0
00:16:32.384 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:16:32.384 data = bsize=4096 blocks=130560, imaxpct=25
00:16:32.384 = sunit=0 swidth=0 blks
00:16:32.384 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:16:32.384 log =internal log bsize=4096 blocks=16384, version=2
00:16:32.384 = sectsz=512 sunit=0 blks, lazy-count=1
00:16:32.384 realtime =none extsz=4096 blocks=0, rtextents=0
00:16:33.330 Discarding blocks...Done.
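The ext4, btrfs and xfs runs above all funnel through the same make_filesystem helper in common/autotest_common.sh; its xtrace (@930 through @949) shows the locals it sets up and the force flag it picks before calling mkfs. Below is a minimal bash sketch of that helper reconstructed from the trace; the retry cap and the sleep between attempts are assumptions, since the trace only shows the i counter being initialized before mkfs succeeds on the first attempt:

    # Sketch of make_filesystem as visible in the xtrace above; retry
    # behavior beyond "local i=0" is an assumption, not shown in the log.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force

        # ext4's mkfs takes -F to force; mkfs.btrfs and mkfs.xfs take -f,
        # which is why the helper branches on fstype (traced at @935/@938).
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi

        until mkfs."$fstype" $force "$dev_name"; do
            [ $((++i)) -ge 3 ] && return 1   # assumed retry limit
            sleep 1
        done
        return 0
    }

Once the helper returns 0, the caller in target/filesystem.sh mounts the fresh filesystem on /mnt/device, performs a touch/sync/rm/sync round trip, unmounts, and confirms with kill -0 and lsblk that the nvmf target process and the namespace's partition are both still present, as the traces above and below show.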
00:16:33.330 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:33.330 14:14:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3317654 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:35.257 00:16:35.257 real 0m2.870s 00:16:35.257 user 0m0.022s 00:16:35.257 sys 0m0.081s 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:35.257 ************************************ 00:16:35.257 END TEST filesystem_in_capsule_xfs 00:16:35.257 ************************************ 00:16:35.257 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:35.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3317654 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3317654 ']' 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3317654 00:16:35.517 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3317654 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3317654' 00:16:35.779 killing process with pid 3317654 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3317654 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3317654 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:35.779 00:16:35.779 real 0m17.587s 00:16:35.779 user 1m9.546s 00:16:35.779 sys 0m1.405s 00:16:35.779 14:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.779 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:35.779 ************************************ 00:16:35.779 END TEST nvmf_filesystem_in_capsule 00:16:35.779 ************************************ 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:36.040 rmmod nvme_tcp 00:16:36.040 rmmod nvme_fabrics 00:16:36.040 rmmod nvme_keyring 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:36.040 14:14:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:16:36.040 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:36.040 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:36.040 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.040 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:36.040 14:14:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:38.586 00:16:38.586 real 0m49.772s 00:16:38.586 user 2m38.242s 00:16:38.586 sys 0m8.841s 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:38.586 
************************************ 00:16:38.586 END TEST nvmf_filesystem 00:16:38.586 ************************************ 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:38.586 ************************************ 00:16:38.586 START TEST nvmf_target_discovery 00:16:38.586 ************************************ 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:38.586 * Looking for test storage... 00:16:38.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:38.586 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.587 --rc genhtml_branch_coverage=1 00:16:38.587 --rc genhtml_function_coverage=1 00:16:38.587 --rc genhtml_legend=1 00:16:38.587 --rc geninfo_all_blocks=1 00:16:38.587 --rc geninfo_unexecuted_blocks=1 00:16:38.587 00:16:38.587 ' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.587 --rc genhtml_branch_coverage=1 00:16:38.587 --rc genhtml_function_coverage=1 00:16:38.587 --rc genhtml_legend=1 00:16:38.587 --rc geninfo_all_blocks=1 00:16:38.587 --rc geninfo_unexecuted_blocks=1 00:16:38.587 00:16:38.587 ' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.587 --rc genhtml_branch_coverage=1 00:16:38.587 --rc genhtml_function_coverage=1 00:16:38.587 --rc genhtml_legend=1 00:16:38.587 --rc geninfo_all_blocks=1 00:16:38.587 --rc geninfo_unexecuted_blocks=1 00:16:38.587 00:16:38.587 ' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:38.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:38.587 --rc genhtml_branch_coverage=1 00:16:38.587 --rc genhtml_function_coverage=1 00:16:38.587 --rc genhtml_legend=1 00:16:38.587 --rc geninfo_all_blocks=1 00:16:38.587 --rc geninfo_unexecuted_blocks=1 00:16:38.587 00:16:38.587 ' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:38.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:38.587 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:16:38.588 14:14:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:16:46.734 14:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:46.734 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:46.734 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:46.734 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:46.734 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:46.735 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:46.735 14:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:46.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:16:46.735 00:16:46.735 --- 10.0.0.2 ping statistics --- 00:16:46.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.735 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:46.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:16:46.735 00:16:46.735 --- 10.0.0.1 ping statistics --- 00:16:46.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.735 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3325564 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3325564 00:16:46.735 14:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3325564 ']' 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.735 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.735 [2024-11-25 14:14:51.023419] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:16:46.735 [2024-11-25 14:14:51.023484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.735 [2024-11-25 14:14:51.124840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.735 [2024-11-25 14:14:51.177880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.735 [2024-11-25 14:14:51.177939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.735 [2024-11-25 14:14:51.177947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.735 [2024-11-25 14:14:51.177955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.735 [2024-11-25 14:14:51.177961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
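Before any RPCs run, the trace above shows nvmf_tcp_init carving the two e810 ports into a self-contained dataplane: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened in iptables, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace. A condensed bash sketch of that sequence follows; the commands and names are taken from the log, while the poll loop standing in for waitforlisten (probing the /var/tmp/spdk.sock RPC socket with spdk_get_version) is an assumption about its behavior:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target-side e810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # the log adds an -m comment tag (SPDK_NVMF:...) so the rule can be cleaned up later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root ns -> namespaced target
    ip netns exec "$NS" ping -c 1 10.0.0.1          # namespaced target -> root ns

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # assumed probe: wait until the target answers on its UNIX-domain RPC socket
    until "$SPDK/scripts/rpc.py" spdk_get_version &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1                # bail out if the target died
        sleep 0.5
    done

Running the target in its own namespace lets a single host exercise real NIC-to-NIC TCP traffic: the initiator's packets leave via cvl_0_1 and arrive on cvl_0_0 instead of being short-circuited through loopback, which is what the two ping checks in the log demonstrate.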
00:16:46.735 [2024-11-25 14:14:51.180223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.735 [2024-11-25 14:14:51.180462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.735 [2024-11-25 14:14:51.180462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.735 [2024-11-25 14:14:51.180297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 [2024-11-25 14:14:51.900861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 Null1 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 14:14:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:46.999 [2024-11-25 14:14:51.961428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.999 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 Null2 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:16:47.000 Null3 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 Null4 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.262 14:14:52 
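The trace above is test/nvmf/target/discovery.sh driving SPDK over JSON-RPC: for each of four null bdevs it creates a subsystem, attaches the bdev as a namespace, and adds a TCP listener (the fourth iteration completes just below). Restated as a minimal stand-alone sketch, assuming a running nvmf_tgt with SPDK's scripts/rpc.py on PATH as rpc.py (the rpc_cmd seen in the trace is the test suite's wrapper around it), and reusing the names and the 10.0.0.2 listener address from this run:

  # create four null bdevs and expose each through its own subsystem on TCP port 4420
  for i in $(seq 1 4); do
    rpc.py bdev_null_create Null$i 102400 512   # size/block size exactly as in the trace
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done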
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:16:47.262
00:16:47.262 Discovery Log Number of Records 6, Generation counter 6
00:16:47.262 =====Discovery Log Entry 0======
00:16:47.262 trtype: tcp
00:16:47.262 adrfam: ipv4
00:16:47.262 subtype: current discovery subsystem
00:16:47.262 treq: not required
00:16:47.262 portid: 0
00:16:47.262 trsvcid: 4420
00:16:47.262 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:47.262 traddr: 10.0.0.2
00:16:47.262 eflags: explicit discovery connections, duplicate discovery information
00:16:47.262 sectype: none
00:16:47.262 =====Discovery Log Entry 1======
00:16:47.262 trtype: tcp
00:16:47.262 adrfam: ipv4
00:16:47.262 subtype: nvme subsystem
00:16:47.262 treq: not required
00:16:47.262 portid: 0
00:16:47.262 trsvcid: 4420
00:16:47.262 subnqn: nqn.2016-06.io.spdk:cnode1
00:16:47.262 traddr: 10.0.0.2
00:16:47.262 eflags: none
00:16:47.262 sectype: none
00:16:47.262 =====Discovery Log Entry 2======
00:16:47.262 trtype: tcp
00:16:47.262 adrfam: ipv4
00:16:47.262 subtype: nvme subsystem
00:16:47.262 treq: not required
00:16:47.262 portid: 0
00:16:47.262 trsvcid: 4420
00:16:47.262 subnqn: nqn.2016-06.io.spdk:cnode2
00:16:47.262 traddr: 10.0.0.2
00:16:47.262 eflags: none
00:16:47.262 sectype: none
00:16:47.262 =====Discovery Log Entry 3======
00:16:47.262 trtype: tcp
00:16:47.262 adrfam: ipv4
00:16:47.262 subtype: nvme subsystem
00:16:47.262 treq: not required
00:16:47.262 portid: 0
00:16:47.262 trsvcid: 4420
00:16:47.262 subnqn: nqn.2016-06.io.spdk:cnode3
00:16:47.262 traddr: 10.0.0.2
00:16:47.262 eflags: none
00:16:47.262 sectype: none
00:16:47.262 =====Discovery Log Entry 4======
00:16:47.262 trtype: tcp
00:16:47.262 adrfam: ipv4
00:16:47.262 subtype: nvme subsystem
00:16:47.262 treq: not required
00:16:47.262 portid: 0
00:16:47.262 trsvcid: 4420
00:16:47.262 subnqn: nqn.2016-06.io.spdk:cnode4
00:16:47.262 traddr: 10.0.0.2
00:16:47.262 eflags: none
00:16:47.262 sectype: none
00:16:47.262 =====Discovery Log Entry 5======
00:16:47.262 trtype: tcp
00:16:47.262 adrfam: ipv4
00:16:47.262 subtype: discovery subsystem referral
00:16:47.262 treq: not required
00:16:47.262 portid: 0
00:16:47.262 trsvcid: 4430
00:16:47.262 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:47.262 traddr: 10.0.0.2
00:16:47.262 eflags: none
00:16:47.262 sectype: none
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:16:47.262 Perform nvmf subsystem discovery via RPC
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.262 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:47.262 [
00:16:47.262 {
00:16:47.262 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:47.262 "subtype": "Discovery",
00:16:47.262 "listen_addresses": [
00:16:47.262 {
00:16:47.262 "trtype": "TCP",
00:16:47.262 "adrfam": "IPv4",
00:16:47.262 "traddr": "10.0.0.2",
00:16:47.262 "trsvcid": "4420"
00:16:47.262 }
00:16:47.262 ],
00:16:47.262 "allow_any_host": true,
00:16:47.262 "hosts": []
00:16:47.262 },
00:16:47.262 {
00:16:47.262 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:47.262 "subtype": "NVMe",
00:16:47.262 "listen_addresses": [
00:16:47.262 {
00:16:47.262 "trtype": "TCP",
00:16:47.262 "adrfam": "IPv4",
00:16:47.262 "traddr": "10.0.0.2",
00:16:47.262 "trsvcid": "4420"
00:16:47.262 }
00:16:47.262 ],
00:16:47.262 "allow_any_host": true,
00:16:47.262 "hosts": [],
00:16:47.262 "serial_number": "SPDK00000000000001",
00:16:47.262 "model_number": "SPDK bdev Controller",
00:16:47.262 "max_namespaces": 32,
00:16:47.262 "min_cntlid": 1,
00:16:47.262 "max_cntlid": 65519,
00:16:47.262 "namespaces": [
00:16:47.262 {
00:16:47.262 "nsid": 1,
00:16:47.262 "bdev_name": "Null1",
00:16:47.262 "name": "Null1",
00:16:47.262 "nguid": "5A0137BBE1D74C58B9264A840BB6D408",
00:16:47.262 "uuid": "5a0137bb-e1d7-4c58-b926-4a840bb6d408"
00:16:47.262 }
00:16:47.262 ]
00:16:47.262 },
00:16:47.262 {
00:16:47.262 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:16:47.262 "subtype": "NVMe",
00:16:47.262 "listen_addresses": [
00:16:47.262 {
00:16:47.262 "trtype": "TCP",
00:16:47.262 "adrfam": "IPv4",
00:16:47.262 "traddr": "10.0.0.2",
00:16:47.262 "trsvcid": "4420"
00:16:47.262 }
00:16:47.262 ],
00:16:47.262 "allow_any_host": true,
00:16:47.262 "hosts": [],
00:16:47.262 "serial_number": "SPDK00000000000002",
00:16:47.262 "model_number": "SPDK bdev Controller",
00:16:47.262 "max_namespaces": 32,
00:16:47.262 "min_cntlid": 1,
00:16:47.262 "max_cntlid": 65519,
00:16:47.262 "namespaces": [
00:16:47.262 {
00:16:47.262 "nsid": 1,
00:16:47.262 "bdev_name": "Null2",
00:16:47.262 "name": "Null2",
00:16:47.263 "nguid": "D7FC87EE8D9C4974A6948EA069D4CBA8",
00:16:47.263 "uuid": "d7fc87ee-8d9c-4974-a694-8ea069d4cba8"
00:16:47.263 }
00:16:47.263 ]
00:16:47.263 },
00:16:47.263 {
00:16:47.263 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:16:47.263 "subtype": "NVMe",
00:16:47.263 "listen_addresses": [
00:16:47.263 {
00:16:47.263 "trtype": "TCP",
00:16:47.263 "adrfam": "IPv4",
00:16:47.263 "traddr": "10.0.0.2",
00:16:47.263 "trsvcid": "4420"
00:16:47.263 }
00:16:47.263 ],
00:16:47.263 "allow_any_host": true,
00:16:47.263 "hosts": [],
00:16:47.263 "serial_number": "SPDK00000000000003",
00:16:47.263 "model_number": "SPDK bdev Controller",
00:16:47.263 "max_namespaces": 32,
00:16:47.263 "min_cntlid": 1,
00:16:47.263 "max_cntlid": 65519,
00:16:47.263 "namespaces": [
00:16:47.263 {
00:16:47.263 "nsid": 1,
00:16:47.263 "bdev_name": "Null3",
00:16:47.263 "name": "Null3",
00:16:47.263 "nguid": "E3C08FFEFE7D4F07B67F78B116D65D5A",
00:16:47.263 "uuid": "e3c08ffe-fe7d-4f07-b67f-78b116d65d5a"
00:16:47.263 }
00:16:47.263 ]
00:16:47.263 },
00:16:47.263 {
00:16:47.263 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:16:47.263 "subtype": "NVMe",
00:16:47.263 "listen_addresses": [
00:16:47.263 {
00:16:47.263 "trtype": "TCP",
00:16:47.263 "adrfam": "IPv4",
00:16:47.263 "traddr": "10.0.0.2",
00:16:47.263 "trsvcid": "4420"
00:16:47.263 }
00:16:47.263 ],
00:16:47.263 "allow_any_host": true,
00:16:47.263 "hosts": [],
00:16:47.263 "serial_number": "SPDK00000000000004",
00:16:47.263 "model_number": "SPDK bdev Controller",
00:16:47.263 "max_namespaces": 32,
00:16:47.263 "min_cntlid": 1,
00:16:47.263 "max_cntlid": 65519,
00:16:47.263 "namespaces": [
00:16:47.263 {
00:16:47.263 "nsid": 1,
00:16:47.263 "bdev_name": "Null4",
00:16:47.263 "name": "Null4",
00:16:47.263 "nguid": "6948D2DB7E1342B4AAB56D8A65C2283D",
00:16:47.263 "uuid": "6948d2db-7e13-42b4-aab5-6d8a65c2283d"
00:16:47.263 }
00:16:47.263 ]
00:16:47.263 }
00:16:47.263 ]
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.263 14:14:52
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.263 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:16:47.527 14:14:52 
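Teardown mirrors setup: the records above delete each subsystem before its backing null bdev, drop the 4430 referral, and then list whatever bdev names remain. As a sketch, under the same rpc.py assumption as before:

  # delete subsystems before their backing bdevs, then remove the referral
  for i in $(seq 1 4); do
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
    rpc.py bdev_null_delete Null$i
  done
  rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  # an empty name list here is the pass condition the test checks
  rpc.py bdev_get_bdevs | jq -r '.[].name'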
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:47.527 rmmod nvme_tcp 00:16:47.527 rmmod nvme_fabrics 00:16:47.527 rmmod nvme_keyring 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3325564 ']' 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3325564 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3325564 ']' 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3325564 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.527 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3325564 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3325564' 00:16:47.789 killing process with pid 3325564 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3325564 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3325564 00:16:47.789 14:14:52 
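nvmftestfini then unwinds the host side: the rmmod lines above come from unloading the kernel initiator stack, after which the nvmf_tgt process is killed and reaped. Done manually, that is roughly (run as root; $nvmfpid is a hypothetical variable holding the target's PID):

  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, as logged above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"   # wait works only in the shell that launched the target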
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.789 14:14:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.336 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:50.336 00:16:50.336 real 0m11.683s 00:16:50.336 user 0m8.691s 00:16:50.336 sys 0m6.201s 00:16:50.336 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.336 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.336 ************************************ 00:16:50.336 END TEST nvmf_target_discovery 00:16:50.336 ************************************ 00:16:50.337 14:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:50.337 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:50.337 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.337 14:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.337 ************************************ 00:16:50.337 START TEST nvmf_referrals 00:16:50.337 ************************************ 00:16:50.337 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:50.337 * Looking for test storage... 
00:16:50.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.337 --rc genhtml_branch_coverage=1 00:16:50.337 --rc genhtml_function_coverage=1 00:16:50.337 --rc genhtml_legend=1 00:16:50.337 --rc geninfo_all_blocks=1 00:16:50.337 --rc geninfo_unexecuted_blocks=1 00:16:50.337 00:16:50.337 ' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.337 --rc genhtml_branch_coverage=1 00:16:50.337 --rc genhtml_function_coverage=1 00:16:50.337 --rc genhtml_legend=1 00:16:50.337 --rc geninfo_all_blocks=1 00:16:50.337 --rc geninfo_unexecuted_blocks=1 00:16:50.337 00:16:50.337 ' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.337 --rc genhtml_branch_coverage=1 00:16:50.337 --rc genhtml_function_coverage=1 00:16:50.337 --rc genhtml_legend=1 00:16:50.337 --rc geninfo_all_blocks=1 00:16:50.337 --rc geninfo_unexecuted_blocks=1 00:16:50.337 00:16:50.337 ' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.337 --rc genhtml_branch_coverage=1 00:16:50.337 --rc genhtml_function_coverage=1 00:16:50.337 --rc genhtml_legend=1 00:16:50.337 --rc geninfo_all_blocks=1 00:16:50.337 --rc geninfo_unexecuted_blocks=1 00:16:50.337 00:16:50.337 ' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
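The arithmetic trace above is scripts/common.sh comparing the installed lcov version against 2, element by element after splitting on '.', '-' and ':'; since 1.15 sorts before 2, the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage option names are kept. An equivalent check, swapping the element-wise loop for sort -V, could look like:

  # lt A B: succeeds when version A sorts strictly before version B
  lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo 'lcov older than 2: use the legacy --rc option names'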
# uname -s 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.337 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:16:50.338 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:16:58.482 14:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:58.482 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:58.482 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:58.482 
14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:58.482 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:58.482 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:58.482 14:15:02 
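Both e810 ports (0000:4b:00.0 and 0000:4b:00.1) were mapped to their kernel interfaces through sysfs; the cvl_0_0/cvl_0_1 names printed above come from expanding /sys/bus/pci/devices/$pci/net/*. The lookup amounts to:

  # list the net interface(s) the kernel created for each detected port
  for pci in 0000:4b:00.0 0000:4b:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"
  done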
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:58.482 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:58.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:58.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.580 ms
00:16:58.483
00:16:58.483 --- 10.0.0.2 ping statistics ---
00:16:58.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:58.483 rtt min/avg/max/mdev = 0.580/0.580/0.580/0.000 ms
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:58.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:58.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms
00:16:58.483
00:16:58.483 --- 10.0.0.1 ping statistics ---
00:16:58.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:58.483 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3330268
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3330268
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3330268 ']'
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:58.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
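nvmf_tcp_init has now built the test topology: one port stays in the root namespace as the initiator (cvl_0_1, 10.0.0.1), the other is moved into the cvl_0_0_ns_spdk namespace as the target (cvl_0_0, 10.0.0.2), port 4420 is opened in the firewall, and a ping is run in each direction before the target is started inside the namespace. Condensed into a sketch (run as root; names and addresses are the ones from this run):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &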
00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.483 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.483 [2024-11-25 14:15:02.846197] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:16:58.483 [2024-11-25 14:15:02.846267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.483 [2024-11-25 14:15:02.946954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.483 [2024-11-25 14:15:03.000657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.483 [2024-11-25 14:15:03.000713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.483 [2024-11-25 14:15:03.000722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.483 [2024-11-25 14:15:03.000729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.483 [2024-11-25 14:15:03.000735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.483 [2024-11-25 14:15:03.003185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.483 [2024-11-25 14:15:03.003329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.483 [2024-11-25 14:15:03.003571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.483 [2024-11-25 14:15:03.003571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 [2024-11-25 14:15:03.727863] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
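With the target up inside the namespace (-m 0xF, hence the four reactor notices above), referrals.sh configures the discovery service. The two RPCs at the end of this block are, in stand-alone form (the option string '-t tcp -o -u 8192' is taken verbatim from the suite's NVMF_TRANSPORT_OPTS):

  # one TCP transport, then a discovery listener on the conventional port 8009
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery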
00:16:58.743 [2024-11-25 14:15:03.744232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.743 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:59.003 14:15:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:59.003 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.004 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.004 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.004 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:59.004 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:59.004 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.004 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.004 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:59.264 14:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.264 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:59.528 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:59.790 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:00.051 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:00.051 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:17:00.051 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.051 14:15:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.051 14:15:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:00.051 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:00.311 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:17:00.312 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:17:00.312 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:17:00.312 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:17:00.312 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:17:00.312 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:00.312 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:17:00.571 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.572 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.572 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.572 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:17:00.572 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:17:00.572 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.572 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.572 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:00.833 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
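[Annotation] Every referral check traced above follows one pattern: mutate the referral list over RPC, then confirm that the target's own view (nvmf_discovery_get_referrals) and the host's view (the discovery log page returned by nvme discover) agree. Condensed from the trace, reusing the host NQN/ID generated for this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Register referrals; -n pins one to a specific subsystem NQN instead of
    # the discovery subsystem default
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    # Target-side view of the referral list
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # Host-side view: referral records in the discovery log, minus the entry
    # for the discovery subsystem being queried
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Removal must name the same transport/address/port (and NQN, if one was given)
    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1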
00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.095 rmmod nvme_tcp 00:17:01.095 rmmod nvme_fabrics 00:17:01.095 rmmod nvme_keyring 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3330268 ']' 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3330268 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3330268 ']' 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3330268 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.095 14:15:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3330268 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3330268' 00:17:01.095 killing process with pid 3330268 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3330268 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3330268 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.095 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.095 14:15:06 
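[Annotation] The nvmftestfini cleanup traced here removes only what the test added: the firewall rules were tagged with SPDK_NVMF comments precisely so they can be filtered back out. The core of it, as seen in the trace (the rmmod lines above are modprobe -r cascading through the dependent modules):

    # Unload the host NVMe/TCP stack; -r cascades to nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the target, then strip only the tagged firewall rules
    kill "$nvmfpid"
    iptables-save | grep -v SPDK_NVMF | iptables-restore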
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.639 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.639 00:17:03.639 real 0m13.318s 00:17:03.639 user 0m15.725s 00:17:03.639 sys 0m6.718s 00:17:03.639 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.639 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 ************************************ 00:17:03.640 END TEST nvmf_referrals 00:17:03.640 ************************************ 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.640 ************************************ 00:17:03.640 START TEST nvmf_connect_disconnect 00:17:03.640 ************************************ 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:03.640 * Looking for test storage... 00:17:03.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.640 14:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.640 --rc genhtml_branch_coverage=1 00:17:03.640 --rc genhtml_function_coverage=1 00:17:03.640 --rc genhtml_legend=1 00:17:03.640 --rc geninfo_all_blocks=1 00:17:03.640 --rc geninfo_unexecuted_blocks=1 00:17:03.640 00:17:03.640 ' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.640 --rc genhtml_branch_coverage=1 00:17:03.640 --rc genhtml_function_coverage=1 00:17:03.640 --rc genhtml_legend=1 00:17:03.640 --rc geninfo_all_blocks=1 00:17:03.640 --rc geninfo_unexecuted_blocks=1 00:17:03.640 00:17:03.640 ' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.640 --rc genhtml_branch_coverage=1 00:17:03.640 --rc genhtml_function_coverage=1 00:17:03.640 --rc genhtml_legend=1 00:17:03.640 --rc geninfo_all_blocks=1 00:17:03.640 --rc geninfo_unexecuted_blocks=1 00:17:03.640 00:17:03.640 ' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.640 --rc genhtml_branch_coverage=1 00:17:03.640 --rc genhtml_function_coverage=1 00:17:03.640 --rc genhtml_legend=1 00:17:03.640 --rc geninfo_all_blocks=1 00:17:03.640 --rc geninfo_unexecuted_blocks=1 00:17:03.640 00:17:03.640 ' 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.640 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.641 14:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.641 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.782 
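[Annotation] The "line 33: [: : integer expression expected" message above is not a test failure (the run continues past it); it is bash's test builtin objecting to nvmf/common.sh comparing an empty string numerically, visible in the trace as '[' '' -eq 1 ']'. A two-line reproduction with a generic variable (the actual variable at common.sh line 33 is not identified in the trace):

    VAR=""
    [ "$VAR" -eq 1 ]        # prints: [: : integer expression expected, exit status 2
    [ "${VAR:-0}" -eq 1 ]   # defaulting the empty operand keeps the test quiet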
14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.782 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:11.783 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.783 
14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:11.783 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:11.783 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
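[Annotation] The NIC discovery above keys off sysfs: for each candidate PCI function, any bound network interface shows up as a directory under /sys/bus/pci/devices/<bdf>/net/. The same lookup in isolation, using the first e810 port from this run:

    pci=0000:4b:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue      # glob stays literal when no netdev is bound
        echo "Found net devices under $pci: ${dev##*/}"
    done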
00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:11.783 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.783 14:15:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:17:11.783 00:17:11.783 --- 10.0.0.2 ping statistics --- 00:17:11.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.783 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:17:11.783 00:17:11.783 --- 10.0.0.1 ping statistics --- 00:17:11.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.783 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3335605 00:17:11.783 14:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3335605 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3335605 ']' 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.783 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:11.783 [2024-11-25 14:15:16.126407] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:17:11.783 [2024-11-25 14:15:16.126474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.783 [2024-11-25 14:15:16.224877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.783 [2024-11-25 14:15:16.278077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.783 [2024-11-25 14:15:16.278133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.783 [2024-11-25 14:15:16.278141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.783 [2024-11-25 14:15:16.278148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.783 [2024-11-25 14:15:16.278155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
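[Annotation] The namespace wiring traced just before this second target launch is the standard recipe for these phy runs: move one port of the e810 pair into a private namespace, address both ends of the 10.0.0.0/24 link, open the data port, and ping both directions before starting the target. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP data port, tagged so teardown can strip the rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns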
00:17:11.783 [2024-11-25 14:15:16.280216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.783 [2024-11-25 14:15:16.280324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.783 [2024-11-25 14:15:16.280485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.783 [2024-11-25 14:15:16.280487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.044 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.044 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:17:12.044 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.044 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.044 14:15:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 [2024-11-25 14:15:17.008380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 14:15:17 
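The target itself is launched inside the namespace, and the harness blocks until its RPC socket answers. A minimal equivalent, assuming the repo-relative paths this job uses; the polling loop shown is an illustration only, the harness's waitforlisten helper is more elaborate in detail:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: one reactor per core 0-3
    nvmfpid=$!
    # Wait for /var/tmp/spdk.sock to accept RPCs before configuring anything.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

The four 'Reactor started on core N' notices above correspond to the 0xF core mask.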
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:12.045 [2024-11-25 14:15:17.089975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:12.045 14:15:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:16.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:30.562 rmmod nvme_tcp 00:17:30.562 rmmod nvme_fabrics 00:17:30.562 rmmod nvme_keyring 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3335605 ']' 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3335605 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3335605 ']' 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3335605 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
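Written out as plain rpc.py calls (rpc_cmd in the harness is a thin wrapper over this script), the provisioning that connect_disconnect.sh performs is, as traced above:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                     # 64 MiB ramdisk, 512 B blocks -> Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five 'disconnected 1 controller(s)' lines that follow are nvme-cli output from the num_iterations=5 loop, roughly:

    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done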
00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3335605 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3335605' 00:17:30.562 killing process with pid 3335605 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3335605 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3335605 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:30.562 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:17:30.824 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:30.824 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:30.824 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.824 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.824 14:15:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.738 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:32.738 00:17:32.738 real 0m29.386s 00:17:32.738 user 1m19.247s 00:17:32.738 sys 0m7.182s 00:17:32.738 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.738 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:32.738 ************************************ 00:17:32.738 END TEST nvmf_connect_disconnect 00:17:32.738 ************************************ 00:17:32.738 14:15:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:32.738 14:15:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.738 14:15:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.738 14:15:37 
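The teardown that closes each of these tests (nvmftestfini) follows one fixed pattern, condensed here under the same assumptions about names and privileges; the _remove_spdk_ns step is assumed to delete the test namespace:

    modprobe -r nvme-tcp nvme-fabrics nvme-keyring || true   # matches the rmmod lines above
    kill "$nvmfpid" && wait "$nvmfpid"                       # killprocess: reactor_0 above
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only the tagged rules
    ip netns delete cvl_0_0_ns_spdk                          # _remove_spdk_ns (assumed)
    ip -4 addr flush cvl_0_1

The multitarget test that starts next re-runs the whole bring-up from scratch, which is why the common.sh setup below repeats the sequence already seen above.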
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:32.738 ************************************ 00:17:32.738 START TEST nvmf_multitarget 00:17:32.738 ************************************ 00:17:32.738 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:33.000 * Looking for test storage... 00:17:33.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.000 14:15:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:33.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.000 --rc genhtml_branch_coverage=1 00:17:33.000 --rc genhtml_function_coverage=1 00:17:33.000 --rc genhtml_legend=1 00:17:33.000 --rc geninfo_all_blocks=1 00:17:33.000 --rc geninfo_unexecuted_blocks=1 00:17:33.000 00:17:33.000 ' 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:33.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.000 --rc genhtml_branch_coverage=1 00:17:33.000 --rc genhtml_function_coverage=1 00:17:33.000 --rc genhtml_legend=1 00:17:33.000 --rc geninfo_all_blocks=1 00:17:33.000 --rc geninfo_unexecuted_blocks=1 00:17:33.000 00:17:33.000 ' 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:33.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.000 --rc genhtml_branch_coverage=1 00:17:33.000 --rc genhtml_function_coverage=1 00:17:33.000 --rc genhtml_legend=1 00:17:33.000 --rc geninfo_all_blocks=1 00:17:33.000 --rc geninfo_unexecuted_blocks=1 00:17:33.000 00:17:33.000 ' 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:33.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.000 --rc genhtml_branch_coverage=1 00:17:33.000 --rc genhtml_function_coverage=1 00:17:33.000 --rc genhtml_legend=1 00:17:33.000 --rc geninfo_all_blocks=1 00:17:33.000 --rc geninfo_unexecuted_blocks=1 00:17:33.000 00:17:33.000 ' 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.000 14:15:38 
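The lcov version gate traced above (lt 1.15 2) is a field-by-field dotted-version compare from scripts/common.sh. A hedged reconstruction of the logic the trace walks through, with the helper body synthesized from the visible steps rather than copied from the script:

    lt() {   # "less than": returns 0 when version $1 sorts before $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "installed lcov predates 2.x"    # matches the trace: 1 < 2 in field one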
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.000 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:33.001 14:15:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.001 14:15:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:41.146 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:41.146 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:41.146 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:41.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
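The device scan above is worth unpacking: common.sh keeps allowlists of supported PCI device IDs (e810 0x159b/0x1592, x722, several mlx5 variants), matches them against the bus, and then resolves each matched address to its kernel interface through sysfs. The resolution step is just a glob, roughly:

    for pci in 0000:4b:00.0 0000:4b:00.1; do           # addresses matched for 0x8086:0x159b
        for path in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net device under $pci: ${path##*/}"   # cvl_0_0 / cvl_0_1
        done
    done

which is how the harness learns that the two e810 ports carry the renamed interfaces cvl_0_0 and cvl_0_1.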
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.146 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:41.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:17:41.147 00:17:41.147 --- 10.0.0.2 ping statistics --- 00:17:41.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.147 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:17:41.147 00:17:41.147 --- 10.0.0.1 ping statistics --- 00:17:41.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.147 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3343729 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3343729 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3343729 ']' 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.147 [2024-11-25 14:15:45.656434] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:17:41.147 [2024-11-25 14:15:45.656500] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.147 [2024-11-25 14:15:45.754273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:41.147 [2024-11-25 14:15:45.807005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.147 [2024-11-25 14:15:45.807058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.147 [2024-11-25 14:15:45.807067] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.147 [2024-11-25 14:15:45.807075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.147 [2024-11-25 14:15:45.807081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.147 [2024-11-25 14:15:45.809143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.147 [2024-11-25 14:15:45.809304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.147 [2024-11-25 14:15:45.809570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:41.147 [2024-11-25 14:15:45.809572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.409 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.409 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:41.409 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.409 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.409 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.670 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.670 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:41.670 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:41.670 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:41.670 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:41.670 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:41.670 "nvmf_tgt_1" 00:17:41.932 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:41.932 "nvmf_tgt_2" 00:17:41.932 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:17:41.932 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:41.932 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:41.932 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:42.193 true 00:17:42.193 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:42.193 true 00:17:42.193 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:42.193 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:42.454 rmmod nvme_tcp 00:17:42.454 rmmod nvme_fabrics 00:17:42.454 rmmod nvme_keyring 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3343729 ']' 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3343729 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3343729 ']' 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3343729 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3343729 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.454 14:15:47 
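The multitarget test body is a small RPC round-trip against the wrapper script named in the trace; target names and sizes are taken from the calls above:

    rpc_py=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default + the two new targets
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default

Each nvmf_delete_target call prints 'true' above because the RPC returns a JSON boolean on success.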
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3343729' 00:17:42.454 killing process with pid 3343729 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3343729 00:17:42.454 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3343729 00:17:42.716 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:42.716 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:42.716 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:42.716 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:42.716 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:42.716 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:42.717 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:42.717 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:42.717 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:42.717 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.717 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.717 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.634 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:44.895 00:17:44.895 real 0m11.921s 00:17:44.895 user 0m10.319s 00:17:44.895 sys 0m6.272s 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:44.895 ************************************ 00:17:44.895 END TEST nvmf_multitarget 00:17:44.895 ************************************ 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.895 ************************************ 00:17:44.895 START TEST nvmf_rpc 00:17:44.895 ************************************ 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:44.895 * Looking for test storage... 
00:17:44.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:44.895 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:45.157 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:45.157 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:45.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.158 --rc genhtml_branch_coverage=1 00:17:45.158 --rc genhtml_function_coverage=1 00:17:45.158 --rc genhtml_legend=1 00:17:45.158 --rc geninfo_all_blocks=1 00:17:45.158 --rc geninfo_unexecuted_blocks=1 00:17:45.158 00:17:45.158 ' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:45.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.158 --rc genhtml_branch_coverage=1 00:17:45.158 --rc genhtml_function_coverage=1 00:17:45.158 --rc genhtml_legend=1 00:17:45.158 --rc geninfo_all_blocks=1 00:17:45.158 --rc geninfo_unexecuted_blocks=1 00:17:45.158 00:17:45.158 ' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:45.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.158 --rc genhtml_branch_coverage=1 00:17:45.158 --rc genhtml_function_coverage=1 00:17:45.158 --rc genhtml_legend=1 00:17:45.158 --rc geninfo_all_blocks=1 00:17:45.158 --rc geninfo_unexecuted_blocks=1 00:17:45.158 00:17:45.158 ' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:45.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.158 --rc genhtml_branch_coverage=1 00:17:45.158 --rc genhtml_function_coverage=1 00:17:45.158 --rc genhtml_legend=1 00:17:45.158 --rc geninfo_all_blocks=1 00:17:45.158 --rc geninfo_unexecuted_blocks=1 00:17:45.158 00:17:45.158 ' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:45.158 14:15:50 
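The `[: : integer expression expected` message above is benign: common.sh line 33 feeds an empty variable to `[ ... -eq 1 ]`, and `[` cannot compare an empty string numerically. A minimal reproduction, plus one possible guarded spelling (the guard is illustrative, not what the tree does):

```bash
#!/usr/bin/env bash
# Reproduce the warning seen in the trace: an unset/empty flag fails
# the numeric test and prints "integer expression expected" to stderr.
flag=""

if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "old-style test: false, and noisy without the 2>/dev/null"
fi

# Guarded variant: substitute 0 when the flag is empty or unset.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "guarded test: clean false for empty/unset values"
fi
```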
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:45.158 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.303 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.303 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:53.303 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:53.303 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:53.303 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:53.304 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:53.304 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
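gather_supported_nvmf_pci_devs, traced above, builds per-family ID tables (e810, x722, mlx) and matches every PCI function against them; the two hits on this rig are E810-family ports with device ID 0x159b bound to the `ice` driver. A condensed sketch of that matching (the associative-array layout is illustrative and only two of the IDs are listed):

```bash
#!/usr/bin/env bash
# Walk /sys/bus/pci/devices and report functions whose vendor:device
# pair marks a supported NVMe-oF NIC, as the "Found 0000:4b:00.x
# (0x8086 - 0x159b)" lines above do.
declare -A nics=(
    ["0x8086:0x1592"]="Intel E810-C"
    ["0x8086:0x159b"]="Intel E810-XXV"
)

for dev in /sys/bus/pci/devices/*; do
    key="$(<"$dev/vendor"):$(<"$dev/device")"
    if [[ -n ${nics[$key]:-} ]]; then
        echo "Found ${dev##*/} ($key): ${nics[$key]}"
    fi
done
```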
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:53.304 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:53.304 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:53.304 14:15:57 
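Each matched function is then resolved to its kernel netdev through sysfs, producing the `Found net devices under 0000:4b:00.x: cvl_0_x` lines; the cvl_* names come from the rig's own naming rules. Roughly (parameterized for illustration; the operstate read is an addition, not in common.sh):

```bash
#!/usr/bin/env bash
# Resolve the netdev name(s) behind one PCI function via sysfs.
pci=${1:?usage: $0 <pci-address, e.g. 0000:4b:00.0>}

for path in /sys/bus/pci/devices/"$pci"/net/*; do
    [[ -e $path ]] || continue      # unbound functions have no net/ entries
    dev=${path##*/}
    echo "Found net device under $pci: $dev ($(<"/sys/class/net/$dev/operstate"))"
done
```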
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:53.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:17:53.304 00:17:53.304 --- 10.0.0.2 ping statistics --- 00:17:53.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.304 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:17:53.304 00:17:53.304 --- 10.0.0.1 ping statistics --- 00:17:53.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.304 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.304 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3348251 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3348251 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3348251 ']' 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.305 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.305 [2024-11-25 14:15:57.716138] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
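Assembled in one place, the topology nvmf_tcp_init just built from the commands traced above: target port cvl_0_0 at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, initiator port cvl_0_1 at 10.0.0.1 in the root namespace, the firewall opened for port 4420, and a bidirectional ping check before any SPDK process starts:

```bash
#!/usr/bin/env bash
# Two-endpoint NVMe/TCP topology, commands as traced in common.sh.
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"       # first E810 port becomes the target side

ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on the initiator interface, then prove both
# directions are reachable.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```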
00:17:53.305 [2024-11-25 14:15:57.716227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.305 [2024-11-25 14:15:57.818525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.305 [2024-11-25 14:15:57.871354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.305 [2024-11-25 14:15:57.871411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.305 [2024-11-25 14:15:57.871420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.305 [2024-11-25 14:15:57.871427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.305 [2024-11-25 14:15:57.871434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.305 [2024-11-25 14:15:57.873873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.305 [2024-11-25 14:15:57.874029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.305 [2024-11-25 14:15:57.874211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.305 [2024-11-25 14:15:57.874212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:53.567 "tick_rate": 2400000000, 00:17:53.567 "poll_groups": [ 00:17:53.567 { 00:17:53.567 "name": "nvmf_tgt_poll_group_000", 00:17:53.567 "admin_qpairs": 0, 00:17:53.567 "io_qpairs": 0, 00:17:53.567 "current_admin_qpairs": 0, 00:17:53.567 "current_io_qpairs": 0, 00:17:53.567 "pending_bdev_io": 0, 00:17:53.567 "completed_nvme_io": 0, 00:17:53.567 "transports": [] 00:17:53.567 }, 00:17:53.567 { 00:17:53.567 "name": "nvmf_tgt_poll_group_001", 00:17:53.567 "admin_qpairs": 0, 00:17:53.567 "io_qpairs": 0, 00:17:53.567 "current_admin_qpairs": 0, 00:17:53.567 "current_io_qpairs": 0, 00:17:53.567 "pending_bdev_io": 0, 00:17:53.567 "completed_nvme_io": 0, 00:17:53.567 "transports": [] 00:17:53.567 }, 00:17:53.567 { 00:17:53.567 "name": "nvmf_tgt_poll_group_002", 00:17:53.567 "admin_qpairs": 0, 00:17:53.567 "io_qpairs": 0, 00:17:53.567 
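nvmfappstart then launches nvmf_tgt inside the namespace (PID 3348251 above) and waitforlisten blocks until the RPC socket answers; the first nvmf_get_stats shows four poll groups with empty transport lists. An approximation of that startup sequence (the rpc_get_methods polling loop is a stand-in for the harness's own wait logic, not a copy of it):

```bash
#!/usr/bin/env bash
# Start the target in the namespace and wait for its RPC socket.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
echo "nvmfpid=$nvmfpid"

# Bounded poll of the UNIX-domain RPC socket (/var/tmp/spdk.sock).
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" rpc_get_methods &>/dev/null && break
    sleep 0.1
done

# Before any transport is created, each poll group reports "transports": [].
"$SPDK/scripts/rpc.py" nvmf_get_stats | jq '.poll_groups[].transports'
```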
"current_admin_qpairs": 0, 00:17:53.567 "current_io_qpairs": 0, 00:17:53.567 "pending_bdev_io": 0, 00:17:53.567 "completed_nvme_io": 0, 00:17:53.567 "transports": [] 00:17:53.567 }, 00:17:53.567 { 00:17:53.567 "name": "nvmf_tgt_poll_group_003", 00:17:53.567 "admin_qpairs": 0, 00:17:53.567 "io_qpairs": 0, 00:17:53.567 "current_admin_qpairs": 0, 00:17:53.567 "current_io_qpairs": 0, 00:17:53.567 "pending_bdev_io": 0, 00:17:53.567 "completed_nvme_io": 0, 00:17:53.567 "transports": [] 00:17:53.567 } 00:17:53.567 ] 00:17:53.567 }' 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:53.567 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.829 [2024-11-25 14:15:58.711015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.829 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:53.829 "tick_rate": 2400000000, 00:17:53.829 "poll_groups": [ 00:17:53.829 { 00:17:53.829 "name": "nvmf_tgt_poll_group_000", 00:17:53.829 "admin_qpairs": 0, 00:17:53.829 "io_qpairs": 0, 00:17:53.829 "current_admin_qpairs": 0, 00:17:53.829 "current_io_qpairs": 0, 00:17:53.829 "pending_bdev_io": 0, 00:17:53.829 "completed_nvme_io": 0, 00:17:53.829 "transports": [ 00:17:53.829 { 00:17:53.829 "trtype": "TCP" 00:17:53.829 } 00:17:53.829 ] 00:17:53.829 }, 00:17:53.829 { 00:17:53.829 "name": "nvmf_tgt_poll_group_001", 00:17:53.829 "admin_qpairs": 0, 00:17:53.829 "io_qpairs": 0, 00:17:53.829 "current_admin_qpairs": 0, 00:17:53.829 "current_io_qpairs": 0, 00:17:53.829 "pending_bdev_io": 0, 00:17:53.829 "completed_nvme_io": 0, 00:17:53.829 "transports": [ 00:17:53.829 { 00:17:53.829 "trtype": "TCP" 00:17:53.829 } 00:17:53.830 ] 00:17:53.830 }, 00:17:53.830 { 00:17:53.830 "name": "nvmf_tgt_poll_group_002", 00:17:53.830 "admin_qpairs": 0, 00:17:53.830 "io_qpairs": 0, 00:17:53.830 "current_admin_qpairs": 0, 00:17:53.830 "current_io_qpairs": 0, 00:17:53.830 "pending_bdev_io": 0, 00:17:53.830 "completed_nvme_io": 0, 00:17:53.830 "transports": [ 00:17:53.830 { 00:17:53.830 "trtype": "TCP" 
00:17:53.830 } 00:17:53.830 ] 00:17:53.830 }, 00:17:53.830 { 00:17:53.830 "name": "nvmf_tgt_poll_group_003", 00:17:53.830 "admin_qpairs": 0, 00:17:53.830 "io_qpairs": 0, 00:17:53.830 "current_admin_qpairs": 0, 00:17:53.830 "current_io_qpairs": 0, 00:17:53.830 "pending_bdev_io": 0, 00:17:53.830 "completed_nvme_io": 0, 00:17:53.830 "transports": [ 00:17:53.830 { 00:17:53.830 "trtype": "TCP" 00:17:53.830 } 00:17:53.830 ] 00:17:53.830 } 00:17:53.830 ] 00:17:53.830 }' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.830 Malloc1 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.830 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.092 [2024-11-25 14:15:58.922977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:54.092 [2024-11-25 14:15:58.959892] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:54.092 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:54.092 could not add new controller: failed to write to nvme-fabrics device 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:54.092 14:15:58 
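The denial above is the point of the first assertion: cnode1 was provisioned with allow-any-host switched off (`-d`) and no hosts registered, so the fabric-level connect is refused with `does not allow host`, and the NOT wrapper counts that failure as success. The provisioning plus the expected failure, condensed (hostnqn generation is DMI-backed on this rig, so repeated calls are stable):

```bash
#!/usr/bin/env bash
# 64 MB / 512 B malloc bdev as a namespace of cnode1, ACL closed,
# listener on 10.0.0.2:4420; the connect must be rejected.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=$(nvme gen-hostnqn)

$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns "$NQN" Malloc1
$rpc nvmf_subsystem_allow_any_host -d "$NQN"
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

if nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"; then
    echo "unexpected: unlisted host was admitted"
else
    echo "rejected, as the test expects"
fi
```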
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.092 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.093 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.093 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.093 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:55.480 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:55.480 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:55.480 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.480 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:55.480 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:58.025 [2024-11-25 14:16:02.767125] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:58.025 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:58.025 could not add new controller: failed to write to nvme-fabrics device 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.025 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.025 
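The sequence just traced is the allow-list round trip: `nvmf_subsystem_add_host` admits the host NQN, the identical connect succeeds and a namespace with serial SPDKISFASTANDAWESOME shows up in lsblk; after `nvmf_subsystem_remove_host` the same connect is refused again. Condensed:

```bash
#!/usr/bin/env bash
# Allow-list round trip: add host -> connect ok -> remove host -> refused.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=$(nvme gen-hostnqn)

$rpc nvmf_subsystem_add_host "$NQN" "$HOSTNQN"
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
nvme disconnect -n "$NQN"

$rpc nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN" \
    || echo "refused after remove_host, as expected"
```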
14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.026 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.412 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.412 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:59.412 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.412 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:59.412 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:01.326 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:01.326 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:01.326 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.326 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:01.326 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.326 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:01.326 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:01.587 
14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.587 [2024-11-25 14:16:06.522406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.587 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:02.973 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:02.973 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:02.973 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:02.973 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:02.973 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
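The third ACL variant, completed just above: re-enabling allow-any-host (`-e`) admits the connect with no allow-list entry at all, after which the subsystem is torn down. Condensed:

```bash
#!/usr/bin/env bash
# With any-host access re-enabled, the host NQN no longer matters.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_subsystem_allow_any_host -e "$NQN"
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 --hostnqn="$(nvme gen-hostnqn)"
nvme disconnect -n "$NQN"
$rpc nvmf_delete_subsystem "$NQN"
```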
disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 [2024-11-25 14:16:10.250088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
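One full pass of the closing loop has completed at this point; the script repeats the same lifecycle once per `seq 1 5` iteration: create subsystem, open the listener, attach Malloc1 as namespace 5, allow any host, connect, wait for the serial, disconnect, detach, delete. A reconstruction of the loop body (the `until lsblk` wait is a simplified stand-in for waitforserial, and the trace's hostnqn/hostid flags are omitted since any host is allowed):

```bash
#!/usr/bin/env bash
# Five create/connect/disconnect/delete lifecycles, as in rpc.sh's tail.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host "$NQN"

    nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
        sleep 1                      # waitforserial, simplified
    done

    nvme disconnect -n "$NQN"
    $rpc nvmf_subsystem_remove_ns "$NQN" 5
    $rpc nvmf_delete_subsystem "$NQN"
    echo "loop $i done"
done
```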
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.518 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:06.901 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:06.901 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:06.901 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.901 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:06.901 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:08.816 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:08.816 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:08.816 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.816 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:08.816 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.816 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:08.816 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.078 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 [2024-11-25 14:16:14.000847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.078 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.078 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:09.078 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.078 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.079 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.079 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.079 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.079 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.079 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:10.464 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:10.464 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:10.464 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.464 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:10.464 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:13.008 
14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:13.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
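Its counterpart, waitforserial_disconnect (autotest_common.sh@1223-1235), waits for the serial to vanish from lsblk after nvme disconnect. Both lsblk probes at @1224 and @1231 run exactly once in each cycle above because the namespace is already gone by then. A sketch under the same hedges:

    waitforserial_disconnect() {
        local i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$1"; do      # @1224: still visible?
            ((i++ > 15)) && return 1
            sleep 2
        done
        lsblk -l -o NAME,SERIAL | grep -q -w "$1" && return 1 # @1231: confirm on flat listing
        return 0                                              # @1235
    }
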
00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.008 [2024-11-25 14:16:17.724479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.008 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:14.389 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:14.389 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:14.389 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.389 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:14.389 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:16.300 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:16.300 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:16.300 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.300 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:16.300 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.300 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:16.300 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.560 [2024-11-25 14:16:21.501536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.560 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.561 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.561 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:16.561 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.561 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.561 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.561 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:17.944 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:17.944 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:17.944 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.944 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:17.944 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:20.536 
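The cycle just traced, repeated five times above, corresponds to target/rpc.sh lines 81-94. Reassembled from the per-line trace markers (commands and arguments are verbatim from the trace; NVME_HOST expands to the --hostnqn/--hostid pair shown at @86), one iteration looks roughly like:

    for i in $(seq 1 $loops); do                                                        # @81
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME        # @82
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # @83
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5           # @84: fixed nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1                # @85
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420     # @86
        waitforserial SPDKISFASTANDAWESOME                                              # @88
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1                                   # @90
        waitforserial_disconnect SPDKISFASTANDAWESOME                                   # @91
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5                   # @93
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1                        # @94
    done
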
14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 [2024-11-25 14:16:25.337480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 [2024-11-25 14:16:25.409678] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.536 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 
14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 [2024-11-25 14:16:25.477863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 [2024-11-25 14:16:25.550083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.537 [2024-11-25 14:16:25.618300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.537 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
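The second loop now being traced (target/rpc.sh lines 99-107, also five iterations) exercises the same subsystem lifecycle purely over RPC, with no host connect: the namespace is added without -n, so the target auto-assigns nsid 1, and it is removed as nsid 1. Reassembled from the markers:

    for i in $(seq 1 $loops); do                                                        # @99
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME        # @100
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # @101
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                # @102: nsid auto-assigns to 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1                # @103
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1                   # @105
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1                        # @107
    done
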
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.798 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:20.798 "tick_rate": 2400000000, 00:18:20.798 "poll_groups": [ 00:18:20.798 { 00:18:20.798 "name": "nvmf_tgt_poll_group_000", 00:18:20.798 "admin_qpairs": 0, 00:18:20.798 "io_qpairs": 224, 00:18:20.798 "current_admin_qpairs": 0, 00:18:20.798 "current_io_qpairs": 0, 00:18:20.798 "pending_bdev_io": 0, 00:18:20.798 "completed_nvme_io": 273, 00:18:20.798 "transports": [ 00:18:20.798 { 00:18:20.798 "trtype": "TCP" 00:18:20.798 } 00:18:20.798 ] 00:18:20.798 }, 00:18:20.798 { 00:18:20.798 "name": "nvmf_tgt_poll_group_001", 00:18:20.798 "admin_qpairs": 1, 00:18:20.798 "io_qpairs": 223, 00:18:20.798 "current_admin_qpairs": 0, 00:18:20.798 "current_io_qpairs": 0, 00:18:20.798 "pending_bdev_io": 0, 00:18:20.798 "completed_nvme_io": 518, 00:18:20.798 "transports": [ 00:18:20.798 { 00:18:20.798 "trtype": "TCP" 00:18:20.798 } 00:18:20.798 ] 00:18:20.798 }, 00:18:20.798 { 00:18:20.798 "name": "nvmf_tgt_poll_group_002", 00:18:20.798 "admin_qpairs": 6, 00:18:20.798 "io_qpairs": 218, 00:18:20.798 "current_admin_qpairs": 0, 00:18:20.798 "current_io_qpairs": 0, 00:18:20.798 "pending_bdev_io": 0, 00:18:20.798 "completed_nvme_io": 219, 00:18:20.798 "transports": [ 00:18:20.798 { 00:18:20.798 "trtype": "TCP" 00:18:20.798 } 00:18:20.798 ] 00:18:20.798 }, 00:18:20.798 { 00:18:20.798 "name": "nvmf_tgt_poll_group_003", 00:18:20.799 "admin_qpairs": 0, 00:18:20.799 "io_qpairs": 224, 00:18:20.799 "current_admin_qpairs": 0, 00:18:20.799 "current_io_qpairs": 0, 00:18:20.799 "pending_bdev_io": 0, 00:18:20.799 "completed_nvme_io": 229, 00:18:20.799 "transports": [ 00:18:20.799 { 00:18:20.799 "trtype": "TCP" 00:18:20.799 } 00:18:20.799 ] 00:18:20.799 } 00:18:20.799 ] 00:18:20.799 }' 00:18:20.799 14:16:25 
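The per-poll-group stats captured above are then reduced by the jsum helper, whose two working lines are traced next (target/rpc.sh@19-20): a jq filter extracts one numeric field per poll group and awk sums the column. A sketch, assuming the helper reads the $stats JSON captured at @110:

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # Against the JSON above:
    #   .poll_groups[].admin_qpairs -> 0 + 1 + 6 + 0         = 7    (traced: (( 7 > 0 )))
    #   .poll_groups[].io_qpairs    -> 224 + 223 + 218 + 224 = 889  (traced: (( 889 > 0 )))
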
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:20.799 rmmod nvme_tcp 00:18:20.799 rmmod nvme_fabrics 00:18:20.799 rmmod nvme_keyring 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3348251 ']' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3348251 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3348251 ']' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3348251 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.799 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3348251 00:18:21.059 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:21.059 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:21.059 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3348251' 00:18:21.059 killing process with pid 3348251 00:18:21.059 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3348251 00:18:21.059 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3348251 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.059 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:23.601 00:18:23.601 real 0m38.316s 00:18:23.601 user 1m54.668s 00:18:23.601 sys 0m7.985s 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.601 ************************************ 00:18:23.601 END TEST nvmf_rpc 00:18:23.601 ************************************ 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.601 ************************************ 00:18:23.601 START TEST nvmf_invalid 00:18:23.601 ************************************ 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:23.601 * Looking for test storage... 
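Between the jsum checks and the END TEST marker above, the harness tears everything down: nvmftestfini unloads the kernel initiator modules (hence the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), then killprocess stops the target, pid 3348251. Sketches of both helpers, loosely reconstructed from the traced markers (line numbers per the trace; exact bodies may differ, and the back-off sleep is an assumption):

    nvmfcleanup() {
        sync                                      # @121
        set +e                                    # @124: module removal may need retries
        for i in {1..20}; do                      # @125
            modprobe -v -r nvme-tcp &&            # @126
                modprobe -v -r nvme-fabrics && break    # @127
            sleep 1                               # assumption: short back-off between tries
        done
        set -e                                    # @128
    }

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                 # @954: require a pid
        kill -0 "$pid" || return 0                # @958: nothing to do if already gone
        if [[ $(uname) == Linux ]]; then          # @959
            local name
            name=$(ps --no-headers -o comm= "$pid")   # @960: here "reactor_0"
            [[ $name == sudo ]] && return 1       # @964: never kill a sudo wrapper directly
        fi
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
        wait "$pid"                               # @978: reap it so the address is really free
    }
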
00:18:23.601 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.601 --rc genhtml_branch_coverage=1 00:18:23.601 --rc genhtml_function_coverage=1 00:18:23.601 --rc genhtml_legend=1 00:18:23.601 --rc geninfo_all_blocks=1 00:18:23.601 --rc geninfo_unexecuted_blocks=1 00:18:23.601 00:18:23.601 ' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.601 --rc genhtml_branch_coverage=1 00:18:23.601 --rc genhtml_function_coverage=1 00:18:23.601 --rc genhtml_legend=1 00:18:23.601 --rc geninfo_all_blocks=1 00:18:23.601 --rc geninfo_unexecuted_blocks=1 00:18:23.601 00:18:23.601 ' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.601 --rc genhtml_branch_coverage=1 00:18:23.601 --rc genhtml_function_coverage=1 00:18:23.601 --rc genhtml_legend=1 00:18:23.601 --rc geninfo_all_blocks=1 00:18:23.601 --rc geninfo_unexecuted_blocks=1 00:18:23.601 00:18:23.601 ' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.601 --rc genhtml_branch_coverage=1 00:18:23.601 --rc genhtml_function_coverage=1 00:18:23.601 --rc genhtml_legend=1 00:18:23.601 --rc geninfo_all_blocks=1 00:18:23.601 --rc geninfo_unexecuted_blocks=1 00:18:23.601 00:18:23.601 ' 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:23.601 14:16:28 
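Before the nvmf_invalid test proper starts, the harness decides which lcov coverage flags to use by comparing the installed lcov version against 2 with a home-grown dotted-version comparator (traced above as lt 1.15 2 -> cmp_versions 1.15 '<' 2 in scripts/common.sh). A sketch reconstructed from the traced markers; the in-tree helper also validates each component with decimal(), elided here:

    lt() { cmp_versions "$1" "<" "$2"; }             # @373, as traced

    cmp_versions() {
        local ver1 ver2 op=$2 v ver1_l ver2_l
        IFS=.- read -ra ver1 <<< "$1"                # @336: "1.15" -> (1 15), ver1_l=2
        IFS=.- read -ra ver2 <<< "$3"                # @337: "2"    -> (2),   ver2_l=1
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do   # @364
            if ((ver1[v] > ver2[v])); then           # @367 (a missing component compares as 0)
                [[ $op == ">" || $op == ">=" ]]; return
            elif ((ver1[v] < ver2[v])); then         # @368
                [[ $op == "<" || $op == "<=" ]]; return
            fi
        done
        [[ $op == *"="* ]]                           # equal versions satisfy only ==, <=, >=
    }

    # lt 1.15 2: the first components already decide it (1 < 2), matching the
    # traced return 0 at @368, so the newer LCOV_OPTS branch-coverage flags get exported.
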
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.601 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:23.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
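The "[: : integer expression expected" message captured above is a real (and here harmless) artifact: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the environment variable under test is empty in this job, and test(1) refuses an empty string where it expects an integer. The standard guard is to default the value before the numeric test; the variable name below is hypothetical, since the trace does not show which one is involved:

    # failing form, as captured in the trace:
    #   [ "$flag" -eq 1 ]        # flag is empty -> "[: : integer expression expected"
    # defensive form:
    if [ "${flag:-0}" -eq 1 ]; then
        : # feature enabled
    fi
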
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:18:23.602 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:31.918 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:31.918 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:31.918 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:31.918 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:31.918 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:31.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:18:31.919 00:18:31.919 --- 10.0.0.2 ping statistics --- 00:18:31.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.919 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:31.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:18:31.919 00:18:31.919 --- 10.0.0.1 ping statistics --- 00:18:31.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.919 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3357964 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3357964 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3357964 ']' 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.919 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:31.919 [2024-11-25 14:16:36.006716] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
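For reference, the nvmftestinit sequence traced above reduces to a small piece of network plumbing: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, while its sibling port (cvl_0_1) stays in the host namespace as the initiator, so NVMe/TCP traffic crosses the physical NICs on a single machine. A condensed replay of the commands the log shows (the long workspace path to nvmf_tgt is shortened here; everything else is taken verbatim from the trace):

  sudo ip netns add cvl_0_0_ns_spdk               # namespace for the target side
  sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk  # move one E810 port into it
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator address, host side
  sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # target reachable from the host namespace
  # nvmfappstart then launches the target inside the namespace:
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF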
00:18:31.919 [2024-11-25 14:16:36.006787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.919 [2024-11-25 14:16:36.111152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.919 [2024-11-25 14:16:36.165253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.919 [2024-11-25 14:16:36.165306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.919 [2024-11-25 14:16:36.165315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.919 [2024-11-25 14:16:36.165322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.919 [2024-11-25 14:16:36.165328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.919 [2024-11-25 14:16:36.167394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.919 [2024-11-25 14:16:36.167673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.919 [2024-11-25 14:16:36.167837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.919 [2024-11-25 14:16:36.167838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:31.919 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30817 00:18:32.180 [2024-11-25 14:16:37.044900] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:32.180 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:32.180 { 00:18:32.180 "nqn": "nqn.2016-06.io.spdk:cnode30817", 00:18:32.180 "tgt_name": "foobar", 00:18:32.180 "method": "nvmf_create_subsystem", 00:18:32.180 "req_id": 1 00:18:32.180 } 00:18:32.180 Got JSON-RPC error response 00:18:32.180 response: 00:18:32.180 { 00:18:32.180 "code": -32603, 00:18:32.180 "message": "Unable to find target foobar" 00:18:32.180 }' 00:18:32.180 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:32.180 { 00:18:32.180 "nqn": "nqn.2016-06.io.spdk:cnode30817", 00:18:32.180 "tgt_name": "foobar", 00:18:32.180 "method": "nvmf_create_subsystem", 00:18:32.180 "req_id": 1 00:18:32.180 } 00:18:32.180 Got JSON-RPC error response 00:18:32.180 
response: 00:18:32.180 { 00:18:32.180 "code": -32603, 00:18:32.180 "message": "Unable to find target foobar" 00:18:32.180 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:32.180 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:32.180 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1668 00:18:32.180 [2024-11-25 14:16:37.249786] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1668: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:32.441 { 00:18:32.441 "nqn": "nqn.2016-06.io.spdk:cnode1668", 00:18:32.441 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:32.441 "method": "nvmf_create_subsystem", 00:18:32.441 "req_id": 1 00:18:32.441 } 00:18:32.441 Got JSON-RPC error response 00:18:32.441 response: 00:18:32.441 { 00:18:32.441 "code": -32602, 00:18:32.441 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:32.441 }' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:32.441 { 00:18:32.441 "nqn": "nqn.2016-06.io.spdk:cnode1668", 00:18:32.441 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:32.441 "method": "nvmf_create_subsystem", 00:18:32.441 "req_id": 1 00:18:32.441 } 00:18:32.441 Got JSON-RPC error response 00:18:32.441 response: 00:18:32.441 { 00:18:32.441 "code": -32602, 00:18:32.441 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:32.441 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15478 00:18:32.441 [2024-11-25 14:16:37.458528] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15478: invalid model number 'SPDK_Controller' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:32.441 { 00:18:32.441 "nqn": "nqn.2016-06.io.spdk:cnode15478", 00:18:32.441 "model_number": "SPDK_Controller\u001f", 00:18:32.441 "method": "nvmf_create_subsystem", 00:18:32.441 "req_id": 1 00:18:32.441 } 00:18:32.441 Got JSON-RPC error response 00:18:32.441 response: 00:18:32.441 { 00:18:32.441 "code": -32602, 00:18:32.441 "message": "Invalid MN SPDK_Controller\u001f" 00:18:32.441 }' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:32.441 { 00:18:32.441 "nqn": "nqn.2016-06.io.spdk:cnode15478", 00:18:32.441 "model_number": "SPDK_Controller\u001f", 00:18:32.441 "method": "nvmf_create_subsystem", 00:18:32.441 "req_id": 1 00:18:32.441 } 00:18:32.441 Got JSON-RPC error response 00:18:32.441 response: 00:18:32.441 { 00:18:32.441 "code": -32602, 00:18:32.441 "message": "Invalid MN SPDK_Controller\u001f" 00:18:32.441 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:32.441 14:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.441 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:18:32.702 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:18:32.703 
14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 
00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ / == \- ]] 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '/OjYG<[@ISp}]3:+8"hi!' 00:18:32.703 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '/OjYG<[@ISp}]3:+8"hi!' nqn.2016-06.io.spdk:cnode26881 00:18:32.965 [2024-11-25 14:16:37.844026] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26881: invalid serial number '/OjYG<[@ISp}]3:+8"hi!' 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:32.965 { 00:18:32.965 "nqn": "nqn.2016-06.io.spdk:cnode26881", 00:18:32.965 "serial_number": "/OjYG<[@ISp}]3:+8\"hi!", 00:18:32.965 "method": "nvmf_create_subsystem", 00:18:32.965 "req_id": 1 00:18:32.965 } 00:18:32.965 Got JSON-RPC error response 00:18:32.965 response: 00:18:32.965 { 00:18:32.965 "code": -32602, 00:18:32.965 "message": "Invalid SN /OjYG<[@ISp}]3:+8\"hi!" 00:18:32.965 }' 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:32.965 { 00:18:32.965 "nqn": "nqn.2016-06.io.spdk:cnode26881", 00:18:32.965 "serial_number": "/OjYG<[@ISp}]3:+8\"hi!", 00:18:32.965 "method": "nvmf_create_subsystem", 00:18:32.965 "req_id": 1 00:18:32.965 } 00:18:32.965 Got JSON-RPC error response 00:18:32.965 response: 00:18:32.965 { 00:18:32.965 "code": -32602, 00:18:32.965 "message": "Invalid SN /OjYG<[@ISp}]3:+8\"hi!" 
00:18:32.965 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.965 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x5c' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
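The wall of printf %x / echo -e / string+= entries around this point is invalid.sh's gen_random_s building a 41-character model number one character at a time out of the printable-ASCII table (codes 32..127) dumped a few lines up; RANDOM=0 at invalid.sh@16 seeds bash's generator, so every run assembles the same strings. A compact sketch of the loop as reconstructed from the trace (xtrace only shows the already-expanded values, so the $RANDOM-based index below is an assumption; the sketch also ignores the edge case that a generated space, code 32, would be stripped by the command substitution):

  gen_random_s() {
      local length=$1 ll string
      local chars=({32..127})    # printable ASCII codes, as in the traced array
      for ((ll = 0; ll < length; ll++)); do
          # decimal code -> hex via printf %x -> character via echo -e '\xNN'
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }
  RANDOM=0           # seeded once, so the "random" strings are reproducible
  gen_random_s 41    # the 41-character model number being assembled here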
00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:32.966 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x21' 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:32.966 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.228 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:33.228 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:33.228 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
46 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
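Each append is a small decimal-to-hex-to-character round trip; concretely, for the 'O' added immediately below:

  printf %x 79      # prints: 4f
  echo -e '\x4f'    # prints: O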
00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
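After the last three appends just below (U, p, $) and a guard that the string does not start with '-' (so it cannot be mistaken for an option flag), the finished model number goes to nvmf_create_subsystem -d and the test pattern repeats: capture the JSON-RPC error, then match its message, exactly as in the earlier 'Invalid SN' exchanges. Roughly (paths shortened, and the variable name $model_number is invented for readability):

  out=$(scripts/rpc.py nvmf_create_subsystem -d "$model_number" \
        nqn.2016-06.io.spdk:cnode32340 2>&1) || true
  [[ $out == *"Invalid MN"* ]]    # expect code -32602 naming the rejected field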
00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']'\''sY\)^e/3.E!5PdP!JX9A=M.9tRAvION:0IW?Up$' 00:18:33.229 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']'\''sY\)^e/3.E!5PdP!JX9A=M.9tRAvION:0IW?Up$' nqn.2016-06.io.spdk:cnode32340 00:18:33.490 [2024-11-25 14:16:38.390176] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32340: invalid model number ']'sY\)^e/3.E!5PdP!JX9A=M.9tRAvION:0IW?Up$' 00:18:33.490 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:33.490 { 00:18:33.490 "nqn": "nqn.2016-06.io.spdk:cnode32340", 00:18:33.490 "model_number": "]'\''sY\\)^e/3.E!5PdP!JX9A=M.9tRAvION:0IW?Up$", 00:18:33.490 "method": "nvmf_create_subsystem", 00:18:33.490 "req_id": 1 00:18:33.490 } 00:18:33.490 Got JSON-RPC error response 00:18:33.490 response: 00:18:33.490 { 00:18:33.490 "code": -32602, 00:18:33.490 "message": "Invalid MN ]'\''sY\\)^e/3.E!5PdP!JX9A=M.9tRAvION:0IW?Up$" 00:18:33.490 }' 00:18:33.490 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:33.490 { 00:18:33.490 "nqn": "nqn.2016-06.io.spdk:cnode32340", 00:18:33.490 "model_number": "]'sY\\)^e/3.E!5PdP!JX9A=M.9tRAvION:0IW?Up$", 00:18:33.490 "method": "nvmf_create_subsystem", 00:18:33.490 "req_id": 1 00:18:33.490 } 00:18:33.490 Got JSON-RPC error response 00:18:33.490 response: 00:18:33.490 { 00:18:33.490 "code": -32602, 00:18:33.490 "message": "Invalid MN 
]'sY\\)^e/3.E!5PdP!JX9A=M.9tRAvION:0IW?Up$" 00:18:33.490 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:33.490 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:33.751 [2024-11-25 14:16:38.591056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.751 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:33.751 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:33.751 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:18:33.751 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:33.751 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:18:34.012 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:18:34.012 [2024-11-25 14:16:39.008658] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:34.012 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:34.012 { 00:18:34.012 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:34.012 "listen_address": { 00:18:34.012 "trtype": "tcp", 00:18:34.012 "traddr": "", 00:18:34.012 "trsvcid": "4421" 00:18:34.012 }, 00:18:34.012 "method": "nvmf_subsystem_remove_listener", 00:18:34.012 "req_id": 1 00:18:34.012 } 00:18:34.012 Got JSON-RPC error response 00:18:34.012 response: 00:18:34.012 { 00:18:34.012 "code": -32602, 00:18:34.012 "message": "Invalid parameters" 00:18:34.012 }' 00:18:34.012 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:34.012 { 00:18:34.012 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:34.012 "listen_address": { 00:18:34.012 "trtype": "tcp", 00:18:34.012 "traddr": "", 00:18:34.012 "trsvcid": "4421" 00:18:34.012 }, 00:18:34.012 "method": "nvmf_subsystem_remove_listener", 00:18:34.012 "req_id": 1 00:18:34.012 } 00:18:34.012 Got JSON-RPC error response 00:18:34.012 response: 00:18:34.012 { 00:18:34.012 "code": -32602, 00:18:34.012 "message": "Invalid parameters" 00:18:34.012 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:34.012 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7833 -i 0 00:18:34.277 [2024-11-25 14:16:39.213477] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7833: invalid cntlid range [0-65519] 00:18:34.277 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:34.277 { 00:18:34.277 "nqn": "nqn.2016-06.io.spdk:cnode7833", 00:18:34.277 "min_cntlid": 0, 00:18:34.277 "method": "nvmf_create_subsystem", 00:18:34.277 "req_id": 1 00:18:34.277 } 00:18:34.277 Got JSON-RPC error response 00:18:34.277 response: 00:18:34.277 { 00:18:34.277 "code": -32602, 00:18:34.278 "message": "Invalid cntlid range [0-65519]" 00:18:34.278 }' 00:18:34.278 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:34.278 { 00:18:34.278 "nqn": 
"nqn.2016-06.io.spdk:cnode7833", 00:18:34.278 "min_cntlid": 0, 00:18:34.278 "method": "nvmf_create_subsystem", 00:18:34.278 "req_id": 1 00:18:34.278 } 00:18:34.278 Got JSON-RPC error response 00:18:34.278 response: 00:18:34.278 { 00:18:34.278 "code": -32602, 00:18:34.278 "message": "Invalid cntlid range [0-65519]" 00:18:34.278 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:34.278 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11777 -i 65520 00:18:34.540 [2024-11-25 14:16:39.418150] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11777: invalid cntlid range [65520-65519] 00:18:34.540 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:34.540 { 00:18:34.540 "nqn": "nqn.2016-06.io.spdk:cnode11777", 00:18:34.540 "min_cntlid": 65520, 00:18:34.540 "method": "nvmf_create_subsystem", 00:18:34.540 "req_id": 1 00:18:34.540 } 00:18:34.540 Got JSON-RPC error response 00:18:34.540 response: 00:18:34.540 { 00:18:34.540 "code": -32602, 00:18:34.540 "message": "Invalid cntlid range [65520-65519]" 00:18:34.540 }' 00:18:34.540 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:34.540 { 00:18:34.540 "nqn": "nqn.2016-06.io.spdk:cnode11777", 00:18:34.540 "min_cntlid": 65520, 00:18:34.540 "method": "nvmf_create_subsystem", 00:18:34.540 "req_id": 1 00:18:34.540 } 00:18:34.540 Got JSON-RPC error response 00:18:34.540 response: 00:18:34.540 { 00:18:34.540 "code": -32602, 00:18:34.540 "message": "Invalid cntlid range [65520-65519]" 00:18:34.540 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:34.540 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17987 -I 0 00:18:34.540 [2024-11-25 14:16:39.598738] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17987: invalid cntlid range [1-0] 00:18:34.801 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:34.801 { 00:18:34.802 "nqn": "nqn.2016-06.io.spdk:cnode17987", 00:18:34.802 "max_cntlid": 0, 00:18:34.802 "method": "nvmf_create_subsystem", 00:18:34.802 "req_id": 1 00:18:34.802 } 00:18:34.802 Got JSON-RPC error response 00:18:34.802 response: 00:18:34.802 { 00:18:34.802 "code": -32602, 00:18:34.802 "message": "Invalid cntlid range [1-0]" 00:18:34.802 }' 00:18:34.802 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:34.802 { 00:18:34.802 "nqn": "nqn.2016-06.io.spdk:cnode17987", 00:18:34.802 "max_cntlid": 0, 00:18:34.802 "method": "nvmf_create_subsystem", 00:18:34.802 "req_id": 1 00:18:34.802 } 00:18:34.802 Got JSON-RPC error response 00:18:34.802 response: 00:18:34.802 { 00:18:34.802 "code": -32602, 00:18:34.802 "message": "Invalid cntlid range [1-0]" 00:18:34.802 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:34.802 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7736 -I 65520 00:18:34.802 [2024-11-25 14:16:39.787329] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7736: invalid cntlid range [1-65520] 00:18:34.802 14:16:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:34.802 { 00:18:34.802 "nqn": "nqn.2016-06.io.spdk:cnode7736", 00:18:34.802 "max_cntlid": 65520, 00:18:34.802 "method": "nvmf_create_subsystem", 00:18:34.802 "req_id": 1 00:18:34.802 } 00:18:34.802 Got JSON-RPC error response 00:18:34.802 response: 00:18:34.802 { 00:18:34.802 "code": -32602, 00:18:34.802 "message": "Invalid cntlid range [1-65520]" 00:18:34.802 }' 00:18:34.802 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:34.802 { 00:18:34.802 "nqn": "nqn.2016-06.io.spdk:cnode7736", 00:18:34.802 "max_cntlid": 65520, 00:18:34.802 "method": "nvmf_create_subsystem", 00:18:34.802 "req_id": 1 00:18:34.802 } 00:18:34.802 Got JSON-RPC error response 00:18:34.802 response: 00:18:34.802 { 00:18:34.802 "code": -32602, 00:18:34.802 "message": "Invalid cntlid range [1-65520]" 00:18:34.802 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:34.802 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25076 -i 6 -I 5 00:18:35.063 [2024-11-25 14:16:39.975928] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25076: invalid cntlid range [6-5] 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:35.063 { 00:18:35.063 "nqn": "nqn.2016-06.io.spdk:cnode25076", 00:18:35.063 "min_cntlid": 6, 00:18:35.063 "max_cntlid": 5, 00:18:35.063 "method": "nvmf_create_subsystem", 00:18:35.063 "req_id": 1 00:18:35.063 } 00:18:35.063 Got JSON-RPC error response 00:18:35.063 response: 00:18:35.063 { 00:18:35.063 "code": -32602, 00:18:35.063 "message": "Invalid cntlid range [6-5]" 00:18:35.063 }' 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:35.063 { 00:18:35.063 "nqn": "nqn.2016-06.io.spdk:cnode25076", 00:18:35.063 "min_cntlid": 6, 00:18:35.063 "max_cntlid": 5, 00:18:35.063 "method": "nvmf_create_subsystem", 00:18:35.063 "req_id": 1 00:18:35.063 } 00:18:35.063 Got JSON-RPC error response 00:18:35.063 response: 00:18:35.063 { 00:18:35.063 "code": -32602, 00:18:35.063 "message": "Invalid cntlid range [6-5]" 00:18:35.063 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:35.063 { 00:18:35.063 "name": "foobar", 00:18:35.063 "method": "nvmf_delete_target", 00:18:35.063 "req_id": 1 00:18:35.063 } 00:18:35.063 Got JSON-RPC error response 00:18:35.063 response: 00:18:35.063 { 00:18:35.063 "code": -32602, 00:18:35.063 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:35.063 }' 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:35.063 { 00:18:35.063 "name": "foobar", 00:18:35.063 "method": "nvmf_delete_target", 00:18:35.063 "req_id": 1 00:18:35.063 } 00:18:35.063 Got JSON-RPC error response 00:18:35.063 response: 00:18:35.063 { 00:18:35.063 "code": -32602, 00:18:35.063 "message": "The specified target doesn't exist, cannot delete it." 
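Each negative test in this block follows the same capture-and-match idiom: the JSON-RPC error printed by rpc.py is captured into out, then glob-matched against the expected message (the backslash-heavy patterns above are just xtrace's rendering of a quoted glob). A condensed sketch of that idiom; the 2>&1 and the exact pattern are illustrative rather than verbatim:

```bash
# Condensed sketch of the capture-and-match idiom traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7833 -i 0 2>&1) || true
[[ $out == *'Invalid cntlid range'* ]]   # assert on the error text, not the rc
```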
00:18:35.063 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:35.063 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:35.063 rmmod nvme_tcp 00:18:35.063 rmmod nvme_fabrics 00:18:35.365 rmmod nvme_keyring 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3357964 ']' 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3357964 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3357964 ']' 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3357964 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3357964 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3357964' 00:18:35.365 killing process with pid 3357964 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3357964 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3357964 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.365 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.906 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:37.906 00:18:37.906 real 0m14.230s 00:18:37.906 user 0m21.399s 00:18:37.906 sys 0m6.783s 00:18:37.906 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.906 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:37.906 ************************************ 00:18:37.906 END TEST nvmf_invalid 00:18:37.906 ************************************ 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:37.907 ************************************ 00:18:37.907 START TEST nvmf_connect_stress 00:18:37.907 ************************************ 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:37.907 * Looking for test storage... 
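Before the connect_stress run above starts, the nvmf_invalid teardown completes in a fixed shape: sync, unload the NVMe/TCP kernel modules, reap the target process, strip only the firewall rules the suite tagged with SPDK_NVMF, and drop the test namespace and addresses. Roughly as below; the real nvmftestfini lives in test/nvmf/common.sh, and the netns removal is an assumption about what remove_spdk_ns does:

```bash
# Rough shape of the nvmftestfini teardown traced above (sketch only).
sync
modprobe -v -r nvme-tcp        # also drops the nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                     # 3357964 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep non-test rules
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true    # assumed remove_spdk_ns body
ip -4 addr flush cvl_0_1
```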
00:18:37.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:37.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.907 --rc genhtml_branch_coverage=1 00:18:37.907 --rc genhtml_function_coverage=1 00:18:37.907 --rc genhtml_legend=1 00:18:37.907 --rc geninfo_all_blocks=1 00:18:37.907 --rc geninfo_unexecuted_blocks=1 00:18:37.907 00:18:37.907 ' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:37.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.907 --rc genhtml_branch_coverage=1 00:18:37.907 --rc genhtml_function_coverage=1 00:18:37.907 --rc genhtml_legend=1 00:18:37.907 --rc geninfo_all_blocks=1 00:18:37.907 --rc geninfo_unexecuted_blocks=1 00:18:37.907 00:18:37.907 ' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:37.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.907 --rc genhtml_branch_coverage=1 00:18:37.907 --rc genhtml_function_coverage=1 00:18:37.907 --rc genhtml_legend=1 00:18:37.907 --rc geninfo_all_blocks=1 00:18:37.907 --rc geninfo_unexecuted_blocks=1 00:18:37.907 00:18:37.907 ' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:37.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:37.907 --rc genhtml_branch_coverage=1 00:18:37.907 --rc genhtml_function_coverage=1 00:18:37.907 --rc genhtml_legend=1 00:18:37.907 --rc geninfo_all_blocks=1 00:18:37.907 --rc geninfo_unexecuted_blocks=1 00:18:37.907 00:18:37.907 ' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
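The cmp_versions trace above is the suite's dotted-version comparator: both versions are split on '.' and '-' via IFS, and fields are compared left to right, here deciding that lcov 1.15 predates 2. A condensed re-derivation of that logic; the helper names match the trace, but the bodies below are assumptions (the per-field decimal validation is omitted), not the verbatim scripts/common.sh code:

```bash
# Condensed re-derivation of the cmp_versions logic traced above.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a == b )) && continue
        case $op in
            '<') (( a < b )); return ;;   # status of (( )) is the answer
            '>') (( a > b )); return ;;
        esac
    done
    return 1   # all fields equal, so neither strictly < nor >
}
lt 1.15 2 && echo '1.15 < 2'   # matches the lcov version gate in the trace
```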
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:37.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:18:37.907 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:46.046 14:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:46.046 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:46.046 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:46.046 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:46.046 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.046 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.046 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:46.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:18:46.047 00:18:46.047 --- 10.0.0.2 ping statistics --- 00:18:46.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.047 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:46.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:18:46.047 00:18:46.047 --- 10.0.0.1 ping statistics --- 00:18:46.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.047 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3363172 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3363172 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3363172 ']' 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
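The interface plumbing traced above gives the suite a real-wire loopback: one port of the back-to-back-wired E810 pair (cvl_0_0) is hidden in a private network namespace, so pinging 10.0.0.2 from the host exercises the physical link rather than the software loopback. The same commands, collected in order (taken from the trace; the initial address flushes and error handling are omitted):

```bash
# The namespace plumbing traced above, collected in order.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port, tagged so teardown can find and drop the rule:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # host -> namespaced port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back the other way
```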
/var/tmp/spdk.sock...' 00:18:46.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.047 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.047 [2024-11-25 14:16:50.377923] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:18:46.047 [2024-11-25 14:16:50.377990] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.047 [2024-11-25 14:16:50.482104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:46.047 [2024-11-25 14:16:50.533968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.047 [2024-11-25 14:16:50.534028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.047 [2024-11-25 14:16:50.534038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.047 [2024-11-25 14:16:50.534045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.047 [2024-11-25 14:16:50.534051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.047 [2024-11-25 14:16:50.536202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.047 [2024-11-25 14:16:50.536438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.047 [2024-11-25 14:16:50.536438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.308 [2024-11-25 14:16:51.260833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
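With nvmf_tgt up inside the namespace, the rpc_cmd calls around this point assemble the test target. Written as plain rpc.py invocations (rpc_cmd in the log is a thin wrapper around the same calls; flags are copied from the trace, and the listener/bdev calls appear just below):

```bash
# The rpc_cmd sequence around this point of the trace, as plain rpc.py calls.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10          # allow any host, serial, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512         # 1000 MB null bdev, 512 B blocks
```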
00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.308 [2024-11-25 14:16:51.286560] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.308 NULL1 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3363512 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.308 14:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.308 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.309 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:46.569 14:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.569 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:46.829 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.829 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:46.829 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:46.829 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.829 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.091 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.091 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:47.091 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.091 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.091 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.351 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.351 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:47.352 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.352 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.352 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:47.923 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.923 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:47.923 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:47.923 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.923 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.184 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.184 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:48.184 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.184 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.184 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.444 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.444 14:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:48.444 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.444 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.444 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.703 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:48.703 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.703 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.703 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:48.963 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.963 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:48.963 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:48.963 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.963 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.533 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.533 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:49.533 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.533 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.533 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:49.792 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.793 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:49.793 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:49.793 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.793 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.078 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.078 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:50.078 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.078 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.078 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.338 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.338 14:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:50.338 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.338 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.338 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:50.598 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.598 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:50.598 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:50.598 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.598 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:51.170 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.170 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:51.170 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.170 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.170 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:51.430 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.430 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:51.430 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.430 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.430 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:51.690 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.690 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:51.690 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.690 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.690 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:51.950 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.950 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:51.950 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:51.950 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.950 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:52.210 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.210 14:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:52.210 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.210 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.210 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:52.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:52.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:52.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.781 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:53.042 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.042 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:53.042 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:53.042 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.042 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:53.303 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.303 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:53.303 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:53.303 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.303 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:53.563 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.563 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:53.563 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:53.563 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.563 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:53.823 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.823 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:53.823 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:53.823 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.823 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:54.392 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.392 14:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:54.392 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:54.392 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.393 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:54.654 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.654 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:54.654 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:54.654 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.654 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:54.916 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.916 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:54.916 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:54.916 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.916 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.177 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.177 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:55.177 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:55.177 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.177 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:55.437 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.437 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:55.437 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:55.437 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.437 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.008 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.008 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:56.008 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:56.008 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.008 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.268 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.268 14:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:56.268 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:56.268 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.268 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:56.529 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3363512 00:18:56.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3363512) - No such process 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3363512 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:56.529 rmmod nvme_tcp 00:18:56.529 rmmod nvme_fabrics 00:18:56.529 rmmod nvme_keyring 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3363172 ']' 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3363172 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3363172 ']' 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3363172 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.529 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3363172 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
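The run above has just finished its stress window: connect_stress.sh probes the perf tool's PID once per pass and replays its batch of RPCs until `kill -0` finally reports "No such process", then reaps the tool and tears the target down. A minimal sketch of that loop, reconstructed from the trace (the PID, the batch-file path, and the rpc_cmd wrapper are as logged in this run, not quoted from the script itself):

    PERF_PID=3363512                 # PID of the connect_stress tool, captured at launch
    rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
    while kill -0 "$PERF_PID"; do    # succeeds while the tool is alive; the final
        rpc_cmd < "$rpcs"            # failing probe prints the "No such process" seen above
    done
    wait "$PERF_PID"                 # reap the exited tool (trace: @38)
    rm -f "$rpcs"                    # drop the RPC batch file (trace: @39)
    nvmftestfini                     # module unload + killprocess, as in the lines that follow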
00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3363172' 00:18:56.790 killing process with pid 3363172 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3363172 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3363172 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:56.790 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:59.332 00:18:59.332 real 0m21.319s 00:18:59.332 user 0m42.060s 00:18:59.332 sys 0m9.478s 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:59.332 ************************************ 00:18:59.332 END TEST nvmf_connect_stress 00:18:59.332 ************************************ 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.332 ************************************ 00:18:59.332 START TEST nvmf_fused_ordering 00:18:59.332 ************************************ 00:18:59.332 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:59.332 * Looking for test storage... 
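Immediately below, fused_ordering.sh resolves its test storage and then scripts/common.sh probes the installed lcov: `lt 1.15 2` splits both version strings (the IFS=.-: reads in the trace) and compares them field by field to pick the coverage options. A standalone sketch of that less-than helper, simplified to dot-separated versions only (the real cmp_versions also handles the `-`/`:` separators, decimal normalization, and the other comparison operators):

    # Return success when version $1 sorts strictly before version $2.
    lt() {
        local IFS=.
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # earliest differing field decides
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1    # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov is pre-2.0: use the branch/function coverage LCOV_OPTS"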
00:18:59.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.332 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.333 --rc genhtml_branch_coverage=1 00:18:59.333 --rc genhtml_function_coverage=1 00:18:59.333 --rc genhtml_legend=1 00:18:59.333 --rc geninfo_all_blocks=1 00:18:59.333 --rc geninfo_unexecuted_blocks=1 00:18:59.333 00:18:59.333 ' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.333 --rc genhtml_branch_coverage=1 00:18:59.333 --rc genhtml_function_coverage=1 00:18:59.333 --rc genhtml_legend=1 00:18:59.333 --rc geninfo_all_blocks=1 00:18:59.333 --rc geninfo_unexecuted_blocks=1 00:18:59.333 00:18:59.333 ' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.333 --rc genhtml_branch_coverage=1 00:18:59.333 --rc genhtml_function_coverage=1 00:18:59.333 --rc genhtml_legend=1 00:18:59.333 --rc geninfo_all_blocks=1 00:18:59.333 --rc geninfo_unexecuted_blocks=1 00:18:59.333 00:18:59.333 ' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:59.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.333 --rc genhtml_branch_coverage=1 00:18:59.333 --rc genhtml_function_coverage=1 00:18:59.333 --rc genhtml_legend=1 00:18:59.333 --rc geninfo_all_blocks=1 00:18:59.333 --rc geninfo_unexecuted_blocks=1 00:18:59.333 00:18:59.333 ' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:59.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:59.333 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.334 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:59.334 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:59.334 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:59.334 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:19:07.467 14:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:07.467 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:07.467 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.467 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:07.468 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:07.468 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:07.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:19:07.468 00:19:07.468 --- 10.0.0.2 ping statistics --- 00:19:07.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.468 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:07.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:07.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:19:07.468 00:19:07.468 --- 10.0.0.1 ping statistics --- 00:19:07.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.468 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3369576 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3369576 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3369576 ']' 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:07.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.468 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.468 [2024-11-25 14:17:11.743413] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:19:07.468 [2024-11-25 14:17:11.743479] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.468 [2024-11-25 14:17:11.842518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.468 [2024-11-25 14:17:11.894098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.468 [2024-11-25 14:17:11.894147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.468 [2024-11-25 14:17:11.894155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.468 [2024-11-25 14:17:11.894176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.468 [2024-11-25 14:17:11.894183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.468 [2024-11-25 14:17:11.894968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 [2024-11-25 14:17:12.609890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 [2024-11-25 14:17:12.634115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 NULL1 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.729 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:07.729 [2024-11-25 14:17:12.703063] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
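For orientation, the target-side setup traced above condenses to the following shell sketch. It is reconstructed only from the rpc_cmd calls visible in this log; replaying them through the standalone scripts/rpc.py client over the default /var/tmp/spdk.sock is an assumption, since the harness drives them through its rpc_cmd wrapper.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport with the options this run passes (-o as traced; -u 8192 sets the IO unit size)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Subsystem cnode1: -a allows any host, -s sets the serial, -m caps it at 10 namespaces
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Listen on the 10.0.0.2:4420 endpoint the ping checks above verified
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 1000 MB null bdev with 512-byte blocks: the "1GB" namespace reported below
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # Point the fused-ordering exerciser at the exported namespace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'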
00:19:07.729 [2024-11-25 14:17:12.703112] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369903 ] 00:19:08.301 Attached to nqn.2016-06.io.spdk:cnode1 00:19:08.301 Namespace ID: 1 size: 1GB 00:19:08.301 fused_ordering(0) 00:19:08.301 fused_ordering(1) 00:19:08.301 fused_ordering(2) 00:19:08.301 fused_ordering(3) 00:19:08.301 fused_ordering(4) 00:19:08.301 fused_ordering(5) 00:19:08.301 fused_ordering(6) 00:19:08.301 fused_ordering(7) 00:19:08.301 fused_ordering(8) 00:19:08.301 fused_ordering(9) 00:19:08.301 fused_ordering(10) 00:19:08.301 fused_ordering(11) 00:19:08.301 fused_ordering(12) 00:19:08.301 fused_ordering(13) 00:19:08.301 fused_ordering(14) 00:19:08.301 fused_ordering(15) 00:19:08.301 fused_ordering(16) 00:19:08.301 fused_ordering(17) 00:19:08.301 fused_ordering(18) 00:19:08.301 fused_ordering(19) 00:19:08.301 fused_ordering(20) 00:19:08.301 fused_ordering(21) 00:19:08.301 fused_ordering(22) 00:19:08.301 fused_ordering(23) 00:19:08.301 fused_ordering(24) 00:19:08.301 fused_ordering(25) 00:19:08.301 fused_ordering(26) 00:19:08.301 fused_ordering(27) 00:19:08.301 fused_ordering(28) 00:19:08.301 fused_ordering(29) 00:19:08.301 fused_ordering(30) 00:19:08.301 fused_ordering(31) 00:19:08.301 fused_ordering(32) 00:19:08.301 fused_ordering(33) 00:19:08.301 fused_ordering(34) 00:19:08.301 fused_ordering(35) 00:19:08.301 fused_ordering(36) 00:19:08.301 fused_ordering(37) 00:19:08.301 fused_ordering(38) 00:19:08.301 fused_ordering(39) 00:19:08.301 fused_ordering(40) 00:19:08.301 fused_ordering(41) 00:19:08.301 fused_ordering(42) 00:19:08.301 fused_ordering(43) 00:19:08.301 fused_ordering(44) 00:19:08.301 fused_ordering(45) 00:19:08.301 fused_ordering(46) 00:19:08.301 fused_ordering(47) 00:19:08.301 fused_ordering(48) 00:19:08.301 fused_ordering(49) 00:19:08.301 fused_ordering(50) 00:19:08.301 fused_ordering(51) 00:19:08.301 fused_ordering(52) 00:19:08.301 fused_ordering(53) 00:19:08.301 fused_ordering(54) 00:19:08.301 fused_ordering(55) 00:19:08.301 fused_ordering(56) 00:19:08.301 fused_ordering(57) 00:19:08.301 fused_ordering(58) 00:19:08.301 fused_ordering(59) 00:19:08.301 fused_ordering(60) 00:19:08.301 fused_ordering(61) 00:19:08.301 fused_ordering(62) 00:19:08.301 fused_ordering(63) 00:19:08.301 fused_ordering(64) 00:19:08.301 fused_ordering(65) 00:19:08.301 fused_ordering(66) 00:19:08.301 fused_ordering(67) 00:19:08.301 fused_ordering(68) 00:19:08.301 fused_ordering(69) 00:19:08.301 fused_ordering(70) 00:19:08.301 fused_ordering(71) 00:19:08.301 fused_ordering(72) 00:19:08.301 fused_ordering(73) 00:19:08.301 fused_ordering(74) 00:19:08.301 fused_ordering(75) 00:19:08.301 fused_ordering(76) 00:19:08.301 fused_ordering(77) 00:19:08.301 fused_ordering(78) 00:19:08.301 fused_ordering(79) 00:19:08.301 fused_ordering(80) 00:19:08.301 fused_ordering(81) 00:19:08.301 fused_ordering(82) 00:19:08.301 fused_ordering(83) 00:19:08.301 fused_ordering(84) 00:19:08.301 fused_ordering(85) 00:19:08.301 fused_ordering(86) 00:19:08.301 fused_ordering(87) 00:19:08.301 fused_ordering(88) 00:19:08.301 fused_ordering(89) 00:19:08.301 fused_ordering(90) 00:19:08.301 fused_ordering(91) 00:19:08.301 fused_ordering(92) 00:19:08.301 fused_ordering(93) 00:19:08.301 fused_ordering(94) 00:19:08.301 fused_ordering(95) 00:19:08.301 fused_ordering(96) 00:19:08.301 fused_ordering(97) 00:19:08.301 fused_ordering(98) 
00:19:08.301 fused_ordering(99) ... fused_ordering(958) [entries 99 through 958 elided: 860 more consecutively numbered fused_ordering lines of identical form, timestamps advancing from 00:19:08.301 to 00:19:10.341]
00:19:10.341 fused_ordering(959) 00:19:10.341 fused_ordering(960) 00:19:10.341 fused_ordering(961) 00:19:10.341 fused_ordering(962) 00:19:10.341 fused_ordering(963) 00:19:10.341 fused_ordering(964) 00:19:10.341 fused_ordering(965) 00:19:10.341 fused_ordering(966) 00:19:10.341 fused_ordering(967) 00:19:10.341 fused_ordering(968) 00:19:10.341 fused_ordering(969) 00:19:10.341 fused_ordering(970) 00:19:10.341 fused_ordering(971) 00:19:10.341 fused_ordering(972) 00:19:10.341 fused_ordering(973) 00:19:10.341 fused_ordering(974) 00:19:10.341 fused_ordering(975) 00:19:10.341 fused_ordering(976) 00:19:10.341 fused_ordering(977) 00:19:10.341 fused_ordering(978) 00:19:10.341 fused_ordering(979) 00:19:10.341 fused_ordering(980) 00:19:10.341 fused_ordering(981) 00:19:10.341 fused_ordering(982) 00:19:10.341 fused_ordering(983) 00:19:10.341 fused_ordering(984) 00:19:10.341 fused_ordering(985) 00:19:10.341 fused_ordering(986) 00:19:10.341 fused_ordering(987) 00:19:10.341 fused_ordering(988) 00:19:10.341 fused_ordering(989) 00:19:10.341 fused_ordering(990) 00:19:10.341 fused_ordering(991) 00:19:10.341 fused_ordering(992) 00:19:10.341 fused_ordering(993) 00:19:10.341 fused_ordering(994) 00:19:10.341 fused_ordering(995) 00:19:10.341 fused_ordering(996) 00:19:10.341 fused_ordering(997) 00:19:10.341 fused_ordering(998) 00:19:10.341 fused_ordering(999) 00:19:10.341 fused_ordering(1000) 00:19:10.341 fused_ordering(1001) 00:19:10.341 fused_ordering(1002) 00:19:10.341 fused_ordering(1003) 00:19:10.341 fused_ordering(1004) 00:19:10.341 fused_ordering(1005) 00:19:10.341 fused_ordering(1006) 00:19:10.341 fused_ordering(1007) 00:19:10.341 fused_ordering(1008) 00:19:10.341 fused_ordering(1009) 00:19:10.341 fused_ordering(1010) 00:19:10.341 fused_ordering(1011) 00:19:10.341 fused_ordering(1012) 00:19:10.341 fused_ordering(1013) 00:19:10.341 fused_ordering(1014) 00:19:10.341 fused_ordering(1015) 00:19:10.341 fused_ordering(1016) 00:19:10.341 fused_ordering(1017) 00:19:10.341 fused_ordering(1018) 00:19:10.341 fused_ordering(1019) 00:19:10.341 fused_ordering(1020) 00:19:10.341 fused_ordering(1021) 00:19:10.341 fused_ordering(1022) 00:19:10.341 fused_ordering(1023) 00:19:10.341 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:10.342 rmmod nvme_tcp 00:19:10.342 rmmod nvme_fabrics 00:19:10.342 rmmod nvme_keyring 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:10.342 14:17:15 
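What follows is the standard nvmftestfini teardown. As a hedged sketch of what the traced cleanup amounts to (each command below appears in the surrounding records, except the netns deletion, which is an assumption about what _remove_spdk_ns does):

  sync                        # flush before unloading the kernel initiator stack
  modprobe -v -r nvme-tcp     # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"             # killprocess 3369576, the nvmf_tgt launched at the start of the test
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's SPDK_NVMF ACCEPT rules
  ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1          # the final flush visible just before the timing summary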
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3369576 ']' 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3369576 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3369576 ']' 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3369576 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369576 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369576' 00:19:10.342 killing process with pid 3369576 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3369576 00:19:10.342 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3369576 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.603 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.516 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:12.516 00:19:12.516 real 0m13.638s 00:19:12.516 user 0m7.111s 00:19:12.516 sys 0m7.451s 00:19:12.516 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.516 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:12.516 ************************************ 00:19:12.516 END TEST nvmf_fused_ordering 00:19:12.516 
************************************ 00:19:12.516 14:17:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:12.516 14:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:12.516 14:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.516 14:17:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.780 ************************************ 00:19:12.780 START TEST nvmf_ns_masking 00:19:12.780 ************************************ 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:12.780 * Looking for test storage... 00:19:12.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.780 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:12.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.781 --rc genhtml_branch_coverage=1 00:19:12.781 --rc genhtml_function_coverage=1 00:19:12.781 --rc genhtml_legend=1 00:19:12.781 --rc geninfo_all_blocks=1 00:19:12.781 --rc geninfo_unexecuted_blocks=1 00:19:12.781 00:19:12.781 ' 00:19:12.781 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:12.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.782 --rc genhtml_branch_coverage=1 00:19:12.782 --rc genhtml_function_coverage=1 00:19:12.782 --rc genhtml_legend=1 00:19:12.782 --rc geninfo_all_blocks=1 00:19:12.782 --rc geninfo_unexecuted_blocks=1 00:19:12.782 00:19:12.782 ' 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:12.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.782 --rc genhtml_branch_coverage=1 00:19:12.782 --rc genhtml_function_coverage=1 00:19:12.782 --rc genhtml_legend=1 00:19:12.782 --rc geninfo_all_blocks=1 00:19:12.782 --rc geninfo_unexecuted_blocks=1 00:19:12.782 00:19:12.782 ' 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:12.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.782 --rc genhtml_branch_coverage=1 00:19:12.782 --rc genhtml_function_coverage=1 00:19:12.782 --rc genhtml_legend=1 00:19:12.782 --rc geninfo_all_blocks=1 00:19:12.782 --rc geninfo_unexecuted_blocks=1 00:19:12.782 00:19:12.782 ' 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
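The lt 1.15 2 call traced above is scripts/common.sh comparing the installed lcov version field by field. A condensed sketch of that logic, simplified from the trace (the in-tree helper also routes each field through the decimal() validator seen above):

  cmp_versions() {   # e.g. cmp_versions 1.15 '<' 2
    local -a ver1 ver2
    local op=$2 v lt=0 gt=0
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
      # the first differing field decides; missing fields compare as 0
      ((${ver1[v]:-0} > ${ver2[v]:-0})) && gt=1 && break
      ((${ver1[v]:-0} < ${ver2[v]:-0})) && lt=1 && break
    done
    case "$op" in   # only the operators this run exercises
      '<') ((lt == 1)) ;;
      '>') ((gt == 1)) ;;
    esac
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, so lcov 1.15 predates 2.x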
nvmf/common.sh@7 -- # uname -s 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.782 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.783 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.051 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=<the paths/export.sh@2 value above with /opt/go/1.21.1/bin prepended> 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=<again with /opt/protoc/21.7/bin prepended> 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo <same PATH value; three near-identical multi-kilobyte dumps of the repeated /opt/protoc:/opt/go:/opt/golangci toolchain prefixes elided> 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=547552e6-0f5a-49d8-8263-b2691a0b6b8d 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a9a8738f-e268-4dfd-bf24-46b931a9f4a5 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ffa129dc-9196-4d53-a9e5-d80dc99e144b 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.052 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:21.197 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:21.197 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:19:21.197 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:21.197 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:21.198 14:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:21.198 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:21.198 14:17:25 
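gather_supported_nvmf_pci_devs classifies NICs purely by PCI vendor:device ID: Intel (0x8086) 0x1592/0x159b land in the e810 bucket, 0x37d2 in x722, and the listed Mellanox (0x15b3) IDs in mlx; with SPDK_TEST_NVMF_NICS=e810 only the e810 bucket survives into pci_devs, which is why the two 0x159b functions below are the ones picked up. A hypothetical stand-alone equivalent of that scan:

# lspci takes vendor:device in bare hex (no 0x prefix); -D prints the PCI domain.
for id in 1592 159b; do
    lspci -D -d "8086:${id}"
done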
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:21.198 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:21.198 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
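Each matched PCI function is then mapped to its kernel net device through sysfs; that glob is where the cvl_0_0/cvl_0_1 names in the messages below come from. The same lookup by hand:

# One subdirectory per netdev backed by the function, e.g. cvl_0_0.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    ls "/sys/bus/pci/devices/${pci}/net"
done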
00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:21.198 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.198 14:17:25 
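nvmf_tcp_init splits the two E810 ports across network namespaces so initiator (10.0.0.1) and target (10.0.0.2) traffic genuinely crosses the link instead of short-circuiting through the local stack. The setup above, plus the link-up lines that follow, reduces to:

# Stale addresses flushed first, then the target port moves into its own netns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target side leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up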
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:21.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:19:21.198 00:19:21.198 --- 10.0.0.2 ping statistics --- 00:19:21.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.198 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:21.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:19:21.198 00:19:21.198 --- 10.0.0.1 ping statistics --- 00:19:21.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.198 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:19:21.198 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3374577 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3374577 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3374577 ']' 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:21.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:21.199 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:21.199 [2024-11-25 14:17:25.479909] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:19:21.199 [2024-11-25 14:17:25.479975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:21.199 [2024-11-25 14:17:25.578758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.199 [2024-11-25 14:17:25.629887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.199 [2024-11-25 14:17:25.629941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.199 [2024-11-25 14:17:25.629950] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:21.199 [2024-11-25 14:17:25.629957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:21.199 [2024-11-25 14:17:25.629963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
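waitforlisten simply blocks until the freshly forked nvmf_tgt answers JSON-RPC on /var/tmp/spdk.sock. A minimal stand-in with the same effect (rpc.py path abbreviated):

# Poll the RPC socket with a short timeout until the target responds.
until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done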
00:19:21.199 [2024-11-25 14:17:25.630762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:21.462 [2024-11-25 14:17:26.501240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:21.462 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:21.723 Malloc1 00:19:21.723 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:21.984 Malloc2 00:19:21.984 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:22.245 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:22.245 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.506 [2024-11-25 14:17:27.468141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.506 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:22.506 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ffa129dc-9196-4d53-a9e5-d80dc99e144b -a 10.0.0.2 -s 4420 -i 4 00:19:22.767 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:22.767 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:22.767 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.767 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:22.767 
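Stripped of the xtrace noise, the whole target bring-up and the first host attach above is this short sequence (rpc.py stands for the full scripts/rpc.py path used in the trace):

# Transport, two 64 MiB/512 B malloc bdevs, subsystem, namespace, listener,
# then a host-side connect with 4 I/O queues and an explicit host ID.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I ffa129dc-9196-4d53-a9e5-d80dc99e144b -a 10.0.0.2 -s 4420 -i 4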
14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:24.682 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:24.943 [ 0]:0x1 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4e816c83b7f4c58bbd92381b925bcea 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4e816c83b7f4c58bbd92381b925bcea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:24.943 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:24.943 [ 0]:0x1 00:19:24.943 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:24.943 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.203 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4e816c83b7f4c58bbd92381b925bcea 00:19:25.203 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4e816c83b7f4c58bbd92381b925bcea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.203 14:17:30 
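The visibility probe used throughout the rest of the test is worth spelling out. Reconstructed, slightly simplified, from the @43-@45 trace lines, ns_is_visible asks the connected host-side controller rather than the target:

# A namespace counts as visible when the controller lists the NSID and
# Identify Namespace returns a non-zero NGUID; masked namespaces come back
# as all zeros.
ns_is_visible() {
    nvme list-ns /dev/nvme0 | grep "$1"
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}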
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:25.204 [ 1]:0x2 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd2a6810bc79442a93237b46bb6c670a 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd2a6810bc79442a93237b46bb6c670a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:25.204 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:25.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:25.464 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:25.724 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:25.724 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:25.724 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ffa129dc-9196-4d53-a9e5-d80dc99e144b -a 10.0.0.2 -s 4420 -i 4 00:19:25.984 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:25.984 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:25.984 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.984 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:25.984 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:25.984 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:28.018 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:28.018 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:28.018 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.018 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:28.279 [ 0]:0x2 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=fd2a6810bc79442a93237b46bb6c670a 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd2a6810bc79442a93237b46bb6c670a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.279 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.539 [ 0]:0x1 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4e816c83b7f4c58bbd92381b925bcea 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4e816c83b7f4c58bbd92381b925bcea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.539 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:28.540 [ 1]:0x2 00:19:28.540 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:28.540 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.540 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd2a6810bc79442a93237b46bb6c670a 00:19:28.540 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd2a6810bc79442a93237b46bb6c670a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.540 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.800 14:17:33 
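This is the heart of the masking test: NSID 1, re-added with --no-auto-visible, identifies with an all-zero NGUID until the target operator grants this host's NQN, and revoking the grant (as the checks just below confirm) hides it again on the live connection, with no reconnect needed. The two RPCs doing the work:

# Grant, then revoke, host1's view of NSID 1 on the same subsystem.
rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1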
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:28.800 [ 0]:0x2 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd2a6810bc79442a93237b46bb6c670a 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd2a6810bc79442a93237b46bb6c670a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:28.800 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.060 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:29.060 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:29.060 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ffa129dc-9196-4d53-a9e5-d80dc99e144b -a 10.0.0.2 -s 4420 -i 4 00:19:29.322 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:29.322 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:29.322 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:29.322 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:29.322 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:29.322 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:31.232 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:31.492 [ 0]:0x1 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f4e816c83b7f4c58bbd92381b925bcea 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f4e816c83b7f4c58bbd92381b925bcea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:31.492 [ 1]:0x2 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd2a6810bc79442a93237b46bb6c670a 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd2a6810bc79442a93237b46bb6c670a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:31.492 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:31.751 [ 0]:0x2 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:31.751 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd2a6810bc79442a93237b46bb6c670a 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd2a6810bc79442a93237b46bb6c670a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.066 14:17:36 
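What follows is a negative check: NSID 2 was added auto-visible and carries no host allow-list, so nvmf_ns_remove_host against it must fail with the JSON-RPC -32602 error shown below. The harness's NOT wrapper turns that expected failure into a passing step; the real helper in autotest_common.sh also validates its argument first, but conceptually it is just:

# Succeed exactly when the wrapped command fails.
NOT() {
    ! "$@"
}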
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:32.066 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:32.066 [2024-11-25 14:17:37.042239] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:32.066 request: 00:19:32.066 { 00:19:32.066 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.066 "nsid": 2, 00:19:32.066 "host": "nqn.2016-06.io.spdk:host1", 00:19:32.066 "method": "nvmf_ns_remove_host", 00:19:32.066 "req_id": 1 00:19:32.066 } 00:19:32.066 Got JSON-RPC error response 00:19:32.066 response: 00:19:32.066 { 00:19:32.066 "code": -32602, 00:19:32.066 "message": "Invalid parameters" 00:19:32.066 } 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:32.066 14:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.066 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:32.067 [ 0]:0x2 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:32.067 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fd2a6810bc79442a93237b46bb6c670a 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fd2a6810bc79442a93237b46bb6c670a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3377062 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3377062 /var/tmp/host.sock 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3377062 ']' 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:32.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.326 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:32.326 [2024-11-25 14:17:37.288191] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:19:32.326 [2024-11-25 14:17:37.288242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377062 ] 00:19:32.326 [2024-11-25 14:17:37.374547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.326 [2024-11-25 14:17:37.410405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.264 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.264 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:33.264 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:33.264 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:33.524 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 547552e6-0f5a-49d8-8263-b2691a0b6b8d 00:19:33.524 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:33.524 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 547552E60F5A49D88263B2691A0B6B8D -i 00:19:33.786 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a9a8738f-e268-4dfd-bf24-46b931a9f4a5 00:19:33.786 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:33.786 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A9A8738FE2684DFDBF2446B931A9F4A5 -i 00:19:33.786 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:34.046 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:34.305 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:34.305 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:34.565 nvme0n1 00:19:34.565 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:34.565 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:34.825 nvme1n2 00:19:34.825 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:34.825 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:34.825 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:34.825 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:34.825 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:35.086 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:35.086 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:35.086 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:35.086 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:35.347 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 547552e6-0f5a-49d8-8263-b2691a0b6b8d == \5\4\7\5\5\2\e\6\-\0\f\5\a\-\4\9\d\8\-\8\2\6\3\-\b\2\6\9\1\a\0\b\6\b\8\d ]] 00:19:35.347 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:35.347 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:35.347 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:35.347 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
a9a8738f-e268-4dfd-bf24-46b931a9f4a5 == \a\9\a\8\7\3\8\f\-\e\2\6\8\-\4\d\f\d\-\b\f\2\4\-\4\6\b\9\3\1\a\9\f\4\a\5 ]] 00:19:35.347 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:35.608 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 547552e6-0f5a-49d8-8263-b2691a0b6b8d 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 547552E60F5A49D88263B2691A0B6B8D 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 547552E60F5A49D88263B2691A0B6B8D 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 547552E60F5A49D88263B2691A0B6B8D 00:19:35.870 [2024-11-25 14:17:40.904395] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:19:35.870 [2024-11-25 14:17:40.904422] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:35.870 [2024-11-25 14:17:40.904429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:35.870 request: 00:19:35.870 { 00:19:35.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.870 "namespace": { 00:19:35.870 "bdev_name": 
"invalid", 00:19:35.870 "nsid": 1, 00:19:35.870 "nguid": "547552E60F5A49D88263B2691A0B6B8D", 00:19:35.870 "no_auto_visible": false 00:19:35.870 }, 00:19:35.870 "method": "nvmf_subsystem_add_ns", 00:19:35.870 "req_id": 1 00:19:35.870 } 00:19:35.870 Got JSON-RPC error response 00:19:35.870 response: 00:19:35.870 { 00:19:35.870 "code": -32602, 00:19:35.870 "message": "Invalid parameters" 00:19:35.870 } 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:35.870 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:35.871 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:35.871 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:35.871 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 547552e6-0f5a-49d8-8263-b2691a0b6b8d 00:19:35.871 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:19:35.871 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 547552E60F5A49D88263B2691A0B6B8D -i 00:19:36.131 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:38.044 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:38.044 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:38.044 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3377062 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3377062 ']' 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3377062 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3377062 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3377062' 00:19:38.306 killing process with pid 3377062 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3377062 00:19:38.306 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3377062 00:19:38.566 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.828 rmmod nvme_tcp 00:19:38.828 rmmod nvme_fabrics 00:19:38.828 rmmod nvme_keyring 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3374577 ']' 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3374577 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3374577 ']' 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3374577 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3374577 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3374577' 00:19:38.828 killing process with pid 3374577 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3374577 00:19:38.828 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3374577 00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
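Condensed, the ns_masking flow exercised above: each namespace UUID is converted to an NGUID, the namespaces are re-registered as invisible (-i) with those NGUIDs, each one is exposed to exactly one host via nvmf_ns_add_host, and a second SPDK instance on /var/tmp/host.sock attaches as each host to confirm it sees only its own namespace. A minimal sketch under those assumptions; the uuid2nguid shape is inferred from the 'tr -d -' xtrace above, and the upcasing step is a guess based on the NGUIDs actually passed:

uuid2nguid() {
  # strip hyphens and upcase (upcasing inferred from e.g. 547552E60F5A49D88263B2691A0B6B8D)
  local u=${1//-/}
  echo "${u^^}"
}
nguid=$(uuid2nguid 547552e6-0f5a-49d8-8263-b2691a0b6b8d)
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
./scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# attach as host1 through the secondary RPC socket and check the UUID round-trips
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 \
  -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
./scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'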
00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:39.090 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:39.090 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:39.090 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.090 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.090 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.001 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:41.001 00:19:41.001 real 0m28.438s 00:19:41.001 user 0m32.236s 00:19:41.001 sys 0m8.274s 00:19:41.001 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.001 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:41.001 ************************************ 00:19:41.001 END TEST nvmf_ns_masking 00:19:41.001 ************************************ 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:41.262 ************************************ 00:19:41.262 START TEST nvmf_nvme_cli 00:19:41.262 ************************************ 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:41.262 * Looking for test storage... 
00:19:41.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.262 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.263 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:41.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.524 --rc genhtml_branch_coverage=1 00:19:41.524 --rc genhtml_function_coverage=1 00:19:41.524 --rc genhtml_legend=1 00:19:41.524 --rc geninfo_all_blocks=1 00:19:41.524 --rc geninfo_unexecuted_blocks=1 00:19:41.524 00:19:41.524 ' 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:41.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.524 --rc genhtml_branch_coverage=1 00:19:41.524 --rc genhtml_function_coverage=1 00:19:41.524 --rc genhtml_legend=1 00:19:41.524 --rc geninfo_all_blocks=1 00:19:41.524 --rc geninfo_unexecuted_blocks=1 00:19:41.524 00:19:41.524 ' 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:41.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.524 --rc genhtml_branch_coverage=1 00:19:41.524 --rc genhtml_function_coverage=1 00:19:41.524 --rc genhtml_legend=1 00:19:41.524 --rc geninfo_all_blocks=1 00:19:41.524 --rc geninfo_unexecuted_blocks=1 00:19:41.524 00:19:41.524 ' 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:41.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.524 --rc genhtml_branch_coverage=1 00:19:41.524 --rc genhtml_function_coverage=1 00:19:41.524 --rc genhtml_legend=1 00:19:41.524 --rc geninfo_all_blocks=1 00:19:41.524 --rc geninfo_unexecuted_blocks=1 00:19:41.524 00:19:41.524 ' 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
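The 'lt 1.15 2' gate above (cmp_versions in scripts/common.sh) decides whether the installed lcov is older than 2.x and therefore needs the branch/function-coverage rc flags. A simplified stand-in for that comparison, not SPDK's exact implementation:

lt() {
  # return 0 when dot-separated version $1 is strictly less than $2
  local IFS=. i
  local -a a=($1) b=($2)
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1  # equal versions are not "less than"
}
if lt "$(lcov --version | awk '{print $NF}')" 2; then
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi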
00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.524 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.525 14:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:41.525 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:49.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:49.678 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:49.678 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.678 
14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:49.678 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:49.678 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:49.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:19:49.678 00:19:49.678 --- 10.0.0.2 ping statistics --- 00:19:49.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.678 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:19:49.678 00:19:49.678 --- 10.0.0.1 ping statistics --- 00:19:49.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.678 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3382479 00:19:49.678 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3382479 00:19:49.679 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:49.679 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3382479 ']' 00:19:49.679 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.679 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.679 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.679 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.679 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.679 [2024-11-25 14:17:53.997439] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
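nvmftestinit above wires the two e810 ports back-to-back by moving the target-side port into a network namespace, then verifies reachability in both directions before launching the target inside that namespace. Condensed from the xtrace, with interface names, addresses, and flags taken verbatim from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF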
00:19:49.679 [2024-11-25 14:17:53.997507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.679 [2024-11-25 14:17:54.100702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.679 [2024-11-25 14:17:54.155821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.679 [2024-11-25 14:17:54.155877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.679 [2024-11-25 14:17:54.155886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.679 [2024-11-25 14:17:54.155894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.679 [2024-11-25 14:17:54.155900] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.679 [2024-11-25 14:17:54.157903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.679 [2024-11-25 14:17:54.158041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.679 [2024-11-25 14:17:54.158218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.679 [2024-11-25 14:17:54.158218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 [2024-11-25 14:17:54.878540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 Malloc0 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
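With the target up and listening on /var/tmp/spdk.sock, nvme_cli.sh provisions it over JSON-RPC; the rpc_cmd invocations in the surrounding xtrace boil down to this sequence (all calls and arguments as they appear in the log):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420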
00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 Malloc1 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.942 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 [2024-11-25 14:17:55.000135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.942 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:49.942 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.942 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:49.942 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.942 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:19:50.204 00:19:50.204 Discovery Log Number of Records 2, Generation counter 2 00:19:50.204 =====Discovery Log Entry 0====== 00:19:50.204 trtype: tcp 00:19:50.204 adrfam: ipv4 00:19:50.204 subtype: current discovery subsystem 00:19:50.204 treq: not required 00:19:50.204 portid: 0 00:19:50.204 trsvcid: 4420 00:19:50.204 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:50.204 traddr: 10.0.0.2 00:19:50.204 eflags: explicit discovery connections, duplicate discovery information 00:19:50.204 sectype: none 00:19:50.204 =====Discovery Log Entry 1====== 00:19:50.204 trtype: tcp 00:19:50.204 adrfam: ipv4 00:19:50.204 subtype: nvme subsystem 00:19:50.204 treq: not required 00:19:50.204 portid: 0 00:19:50.204 trsvcid: 4420 00:19:50.204 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:50.204 traddr: 10.0.0.2 00:19:50.204 eflags: none 00:19:50.204 sectype: none 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:50.204 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:52.120 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:52.120 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:52.120 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:52.120 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:52.120 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:52.120 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:54.061 14:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:54.061 /dev/nvme0n2 ]] 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:54.061 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.062 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:54.062 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:54.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:54.331 14:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:54.331 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:54.331 rmmod nvme_tcp 00:19:54.331 rmmod nvme_fabrics 00:19:54.591 rmmod nvme_keyring 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3382479 ']' 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3382479 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3382479 ']' 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3382479 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3382479 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382479' 00:19:54.592 killing process with pid 3382479 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3382479 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3382479 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.592 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.138 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:57.138 00:19:57.138 real 0m15.590s 00:19:57.138 user 0m24.204s 00:19:57.138 sys 0m6.451s 00:19:57.138 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.138 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:57.138 ************************************ 00:19:57.138 END TEST nvmf_nvme_cli 00:19:57.138 ************************************ 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.139 ************************************ 00:19:57.139 START TEST nvmf_vfio_user 00:19:57.139 ************************************ 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:57.139 * Looking for test storage... 00:19:57.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:19:57.139 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:57.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.139 --rc genhtml_branch_coverage=1 00:19:57.139 --rc genhtml_function_coverage=1 00:19:57.139 --rc genhtml_legend=1 00:19:57.139 --rc geninfo_all_blocks=1 00:19:57.139 --rc geninfo_unexecuted_blocks=1 00:19:57.139 00:19:57.139 ' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:57.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.139 --rc genhtml_branch_coverage=1 00:19:57.139 --rc genhtml_function_coverage=1 00:19:57.139 --rc genhtml_legend=1 00:19:57.139 --rc geninfo_all_blocks=1 00:19:57.139 --rc geninfo_unexecuted_blocks=1 00:19:57.139 00:19:57.139 ' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:57.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.139 --rc genhtml_branch_coverage=1 00:19:57.139 --rc genhtml_function_coverage=1 00:19:57.139 --rc genhtml_legend=1 00:19:57.139 --rc geninfo_all_blocks=1 00:19:57.139 --rc geninfo_unexecuted_blocks=1 00:19:57.139 00:19:57.139 ' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:57.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.139 --rc genhtml_branch_coverage=1 00:19:57.139 --rc genhtml_function_coverage=1 00:19:57.139 --rc genhtml_legend=1 00:19:57.139 --rc geninfo_all_blocks=1 00:19:57.139 --rc geninfo_unexecuted_blocks=1 00:19:57.139 00:19:57.139 ' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.139 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.140 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
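The "[: : integer expression expected" message just above is bash itself complaining: common.sh line 33 evaluated '[' '' -eq 1 ']', i.e. an empty string fed to a numeric test. A minimal repro and the usual guard, sketched with a stand-in variable name (SOME_FLAG), since the actual variable at common.sh:33 is not visible in this log:

    # Hypothetical sketch; SOME_FLAG stands in for whatever variable
    # common.sh tests at line 33 (its name is not shown in this log).
    SOME_FLAG=""
    [ "$SOME_FLAG" -eq 1 ]        # -> "[: : integer expression expected"
    [ "${SOME_FLAG:-0}" -eq 1 ]   # defaulting first evaluates cleanly to false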
00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3384400 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3384400' 00:19:57.140 Process pid: 3384400 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3384400 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3384400 ']' 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.140 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:57.140 [2024-11-25 14:18:02.133513] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:19:57.140 [2024-11-25 14:18:02.133568] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.140 [2024-11-25 14:18:02.220054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.401 [2024-11-25 14:18:02.251079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.401 [2024-11-25 14:18:02.251110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
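Distilled from the trace above: the harness starts nvmf_tgt pinned to cores 0-3 with all tracepoint groups enabled, then waitforlisten blocks until the target answers on its UNIX RPC socket. A condensed sketch using the paths shown in this log; the polling loop is an illustration, not the harness's exact waitforlisten logic:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until the default RPC socket responds.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done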
00:19:57.401 [2024-11-25 14:18:02.251115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.401 [2024-11-25 14:18:02.251120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.401 [2024-11-25 14:18:02.251124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:57.401 [2024-11-25 14:18:02.252487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.401 [2024-11-25 14:18:02.252643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.401 [2024-11-25 14:18:02.252795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.401 [2024-11-25 14:18:02.252797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.974 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.974 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:57.974 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:58.917 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:59.178 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:59.178 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:59.178 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:59.178 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:59.178 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:59.438 Malloc1 00:19:59.438 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:59.438 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:59.698 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:59.959 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:59.959 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:59.959 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:00.219 Malloc2 00:20:00.219 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
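The provisioning traced above and continued below is a per-device loop: create the VFIOUSER transport once, then for each device make the vfio-user socket directory, back it with a 64 MiB malloc bdev (512-byte blocks), create the subsystem, attach the namespace, and listen on the directory. The same RPC calls, condensed, with paths and arguments taken from this log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
        $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"
        $rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        $rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
    done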
00:20:00.219 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:00.479 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:00.742 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:20:00.742 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:20:00.742 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:00.742 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:00.742 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:20:00.742 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:00.742 [2024-11-25 14:18:05.652294] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:20:00.742 [2024-11-25 14:18:05.652335] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3385090 ] 00:20:00.742 [2024-11-25 14:18:05.691452] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:20:00.742 [2024-11-25 14:18:05.700427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:00.742 [2024-11-25 14:18:05.700445] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f330fef9000 00:20:00.742 [2024-11-25 14:18:05.701426] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.702427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.703437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.704445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.705450] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.706454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.707464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.708467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:00.742 [2024-11-25 14:18:05.709475] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:00.742 [2024-11-25 14:18:05.709484] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f330feee000 00:20:00.742 [2024-11-25 14:18:05.710396] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:00.742 [2024-11-25 14:18:05.723836] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:20:00.742 [2024-11-25 14:18:05.723857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:20:00.742 [2024-11-25 14:18:05.726564] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:00.742 [2024-11-25 14:18:05.726596] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:00.742 [2024-11-25 14:18:05.726654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:20:00.742 [2024-11-25 14:18:05.726664] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:20:00.742 [2024-11-25 14:18:05.726669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:20:00.742 [2024-11-25 14:18:05.727568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:20:00.742 [2024-11-25 14:18:05.727575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:20:00.742 [2024-11-25 14:18:05.727580] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:20:00.742 [2024-11-25 14:18:05.728568] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:20:00.742 [2024-11-25 14:18:05.728575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:20:00.742 [2024-11-25 14:18:05.728581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:00.742 [2024-11-25 14:18:05.729579] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:20:00.742 [2024-11-25 14:18:05.729585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:00.742 [2024-11-25 14:18:05.730584] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
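Note the attach path here: there is no PCI device, so spdk_nvme_identify addresses the controller with a transport-ID string whose traddr is the listener's socket directory created above, and the register reads and writes in the surrounding trace are the standard NVMe bring-up handshake (wait for CSTS.RDY=0 with CC.EN=0, set CC.EN=1, wait for CSTS.RDY=1) carried over the vfio-user socket instead of MMIO. The invocation, distilled from this trace (-g appears in the identify run's EAL parameters as --single-file-segments):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci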
00:20:00.742 [2024-11-25 14:18:05.730590] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:00.742 [2024-11-25 14:18:05.730593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:00.742 [2024-11-25 14:18:05.730598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:00.742 [2024-11-25 14:18:05.730704] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:20:00.742 [2024-11-25 14:18:05.730707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:00.742 [2024-11-25 14:18:05.730711] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:20:00.742 [2024-11-25 14:18:05.731591] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:20:00.742 [2024-11-25 14:18:05.732590] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:20:00.742 [2024-11-25 14:18:05.733600] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:00.742 [2024-11-25 14:18:05.734602] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:00.742 [2024-11-25 14:18:05.734664] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:00.742 [2024-11-25 14:18:05.735615] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:20:00.742 [2024-11-25 14:18:05.735621] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:00.742 [2024-11-25 14:18:05.735624] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:00.742 [2024-11-25 14:18:05.735639] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:20:00.742 [2024-11-25 14:18:05.735649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:00.742 [2024-11-25 14:18:05.735661] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:00.742 [2024-11-25 14:18:05.735665] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:00.742 [2024-11-25 14:18:05.735668] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:00.742 [2024-11-25 14:18:05.735677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.735710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.735717] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:20:00.743 [2024-11-25 14:18:05.735722] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:20:00.743 [2024-11-25 14:18:05.735725] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:20:00.743 [2024-11-25 14:18:05.735729] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:00.743 [2024-11-25 14:18:05.735732] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:20:00.743 [2024-11-25 14:18:05.735735] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:20:00.743 [2024-11-25 14:18:05.735739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735751] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.735761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.735769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.743 [2024-11-25 14:18:05.735777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.743 [2024-11-25 14:18:05.735783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.743 [2024-11-25 14:18:05.735789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.743 [2024-11-25 14:18:05.735793] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.735816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.735820] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:20:00.743 
[2024-11-25 14:18:05.735824] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735839] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.735850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.735892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735903] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:00.743 [2024-11-25 14:18:05.735906] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:00.743 [2024-11-25 14:18:05.735908] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:00.743 [2024-11-25 14:18:05.735913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.735922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.735929] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:20:00.743 [2024-11-25 14:18:05.735936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735947] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:00.743 [2024-11-25 14:18:05.735950] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:00.743 [2024-11-25 14:18:05.735952] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:00.743 [2024-11-25 14:18:05.735958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.735975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.735984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.735994] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:00.743 [2024-11-25 14:18:05.735997] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:00.743 [2024-11-25 14:18:05.736000] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:00.743 [2024-11-25 14:18:05.736004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.736015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.736021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.736025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.736032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.736036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.736040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.736043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.736047] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:00.743 [2024-11-25 14:18:05.736050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:20:00.743 [2024-11-25 14:18:05.736053] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:20:00.743 [2024-11-25 14:18:05.736067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.736076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.736084] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.736095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.736103] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.736109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.736118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.736129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:00.743 [2024-11-25 14:18:05.736139] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:00.743 [2024-11-25 14:18:05.736143] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:00.743 [2024-11-25 14:18:05.736145] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:00.743 [2024-11-25 14:18:05.736148] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:00.743 [2024-11-25 14:18:05.736150] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:00.743 [2024-11-25 14:18:05.736155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:00.743 [2024-11-25 14:18:05.736164] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:00.743 [2024-11-25 14:18:05.736167] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:00.743 [2024-11-25 14:18:05.736170] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:00.743 [2024-11-25 14:18:05.736174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.736179] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:00.743 [2024-11-25 14:18:05.736182] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:00.743 [2024-11-25 14:18:05.736185] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:00.743 [2024-11-25 14:18:05.736189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:00.743 [2024-11-25 14:18:05.736194] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:00.743 [2024-11-25 14:18:05.736197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:00.743 [2024-11-25 14:18:05.736200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:00.743 [2024-11-25 14:18:05.736204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:00.744 [2024-11-25 14:18:05.736209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:00.744 [2024-11-25 14:18:05.736218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:20:00.744 [2024-11-25 14:18:05.736225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:00.744 [2024-11-25 14:18:05.736230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:00.744 ===================================================== 00:20:00.744 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:00.744 ===================================================== 00:20:00.744 Controller Capabilities/Features 00:20:00.744 ================================ 00:20:00.744 Vendor ID: 4e58 00:20:00.744 Subsystem Vendor ID: 4e58 00:20:00.744 Serial Number: SPDK1 00:20:00.744 Model Number: SPDK bdev Controller 00:20:00.744 Firmware Version: 25.01 00:20:00.744 Recommended Arb Burst: 6 00:20:00.744 IEEE OUI Identifier: 8d 6b 50 00:20:00.744 Multi-path I/O 00:20:00.744 May have multiple subsystem ports: Yes 00:20:00.744 May have multiple controllers: Yes 00:20:00.744 Associated with SR-IOV VF: No 00:20:00.744 Max Data Transfer Size: 131072 00:20:00.744 Max Number of Namespaces: 32 00:20:00.744 Max Number of I/O Queues: 127 00:20:00.744 NVMe Specification Version (VS): 1.3 00:20:00.744 NVMe Specification Version (Identify): 1.3 00:20:00.744 Maximum Queue Entries: 256 00:20:00.744 Contiguous Queues Required: Yes 00:20:00.744 Arbitration Mechanisms Supported 00:20:00.744 Weighted Round Robin: Not Supported 00:20:00.744 Vendor Specific: Not Supported 00:20:00.744 Reset Timeout: 15000 ms 00:20:00.744 Doorbell Stride: 4 bytes 00:20:00.744 NVM Subsystem Reset: Not Supported 00:20:00.744 Command Sets Supported 00:20:00.744 NVM Command Set: Supported 00:20:00.744 Boot Partition: Not Supported 00:20:00.744 Memory Page Size Minimum: 4096 bytes 00:20:00.744 Memory Page Size Maximum: 4096 bytes 00:20:00.744 Persistent Memory Region: Not Supported 00:20:00.744 Optional Asynchronous Events Supported 00:20:00.744 Namespace Attribute Notices: Supported 00:20:00.744 Firmware Activation Notices: Not Supported 00:20:00.744 ANA Change Notices: Not Supported 00:20:00.744 PLE Aggregate Log Change Notices: Not Supported 00:20:00.744 LBA Status Info Alert Notices: Not Supported 00:20:00.744 EGE Aggregate Log Change Notices: Not Supported 00:20:00.744 Normal NVM Subsystem Shutdown event: Not Supported 00:20:00.744 Zone Descriptor Change Notices: Not Supported 00:20:00.744 Discovery Log Change Notices: Not Supported 00:20:00.744 Controller Attributes 00:20:00.744 128-bit Host Identifier: Supported 00:20:00.744 Non-Operational Permissive Mode: Not Supported 00:20:00.744 NVM Sets: Not Supported 00:20:00.744 Read Recovery Levels: Not Supported 00:20:00.744 Endurance Groups: Not Supported 00:20:00.744 Predictable Latency Mode: Not Supported 00:20:00.744 Traffic Based Keep ALive: Not Supported 00:20:00.744 Namespace Granularity: Not Supported 00:20:00.744 SQ Associations: Not Supported 00:20:00.744 UUID List: Not Supported 00:20:00.744 Multi-Domain Subsystem: Not Supported 00:20:00.744 Fixed Capacity Management: Not Supported 00:20:00.744 Variable Capacity Management: Not Supported 00:20:00.744 Delete Endurance Group: Not Supported 00:20:00.744 Delete NVM Set: Not Supported 00:20:00.744 Extended LBA Formats Supported: Not Supported 00:20:00.744 Flexible Data Placement Supported: Not Supported 00:20:00.744 00:20:00.744 Controller Memory Buffer Support 00:20:00.744 ================================ 00:20:00.744 
Supported: No 00:20:00.744 00:20:00.744 Persistent Memory Region Support 00:20:00.744 ================================ 00:20:00.744 Supported: No 00:20:00.744 00:20:00.744 Admin Command Set Attributes 00:20:00.744 ============================ 00:20:00.744 Security Send/Receive: Not Supported 00:20:00.744 Format NVM: Not Supported 00:20:00.744 Firmware Activate/Download: Not Supported 00:20:00.744 Namespace Management: Not Supported 00:20:00.744 Device Self-Test: Not Supported 00:20:00.744 Directives: Not Supported 00:20:00.744 NVMe-MI: Not Supported 00:20:00.744 Virtualization Management: Not Supported 00:20:00.744 Doorbell Buffer Config: Not Supported 00:20:00.744 Get LBA Status Capability: Not Supported 00:20:00.744 Command & Feature Lockdown Capability: Not Supported 00:20:00.744 Abort Command Limit: 4 00:20:00.744 Async Event Request Limit: 4 00:20:00.744 Number of Firmware Slots: N/A 00:20:00.744 Firmware Slot 1 Read-Only: N/A 00:20:00.744 Firmware Activation Without Reset: N/A 00:20:00.744 Multiple Update Detection Support: N/A 00:20:00.744 Firmware Update Granularity: No Information Provided 00:20:00.744 Per-Namespace SMART Log: No 00:20:00.744 Asymmetric Namespace Access Log Page: Not Supported 00:20:00.744 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:20:00.744 Command Effects Log Page: Supported 00:20:00.744 Get Log Page Extended Data: Supported 00:20:00.744 Telemetry Log Pages: Not Supported 00:20:00.744 Persistent Event Log Pages: Not Supported 00:20:00.744 Supported Log Pages Log Page: May Support 00:20:00.744 Commands Supported & Effects Log Page: Not Supported 00:20:00.744 Feature Identifiers & Effects Log Page:May Support 00:20:00.744 NVMe-MI Commands & Effects Log Page: May Support 00:20:00.744 Data Area 4 for Telemetry Log: Not Supported 00:20:00.744 Error Log Page Entries Supported: 128 00:20:00.744 Keep Alive: Supported 00:20:00.744 Keep Alive Granularity: 10000 ms 00:20:00.744 00:20:00.744 NVM Command Set Attributes 00:20:00.744 ========================== 00:20:00.744 Submission Queue Entry Size 00:20:00.744 Max: 64 00:20:00.744 Min: 64 00:20:00.744 Completion Queue Entry Size 00:20:00.744 Max: 16 00:20:00.744 Min: 16 00:20:00.744 Number of Namespaces: 32 00:20:00.744 Compare Command: Supported 00:20:00.744 Write Uncorrectable Command: Not Supported 00:20:00.744 Dataset Management Command: Supported 00:20:00.744 Write Zeroes Command: Supported 00:20:00.744 Set Features Save Field: Not Supported 00:20:00.744 Reservations: Not Supported 00:20:00.744 Timestamp: Not Supported 00:20:00.744 Copy: Supported 00:20:00.744 Volatile Write Cache: Present 00:20:00.744 Atomic Write Unit (Normal): 1 00:20:00.744 Atomic Write Unit (PFail): 1 00:20:00.744 Atomic Compare & Write Unit: 1 00:20:00.744 Fused Compare & Write: Supported 00:20:00.744 Scatter-Gather List 00:20:00.744 SGL Command Set: Supported (Dword aligned) 00:20:00.744 SGL Keyed: Not Supported 00:20:00.744 SGL Bit Bucket Descriptor: Not Supported 00:20:00.744 SGL Metadata Pointer: Not Supported 00:20:00.744 Oversized SGL: Not Supported 00:20:00.744 SGL Metadata Address: Not Supported 00:20:00.744 SGL Offset: Not Supported 00:20:00.744 Transport SGL Data Block: Not Supported 00:20:00.744 Replay Protected Memory Block: Not Supported 00:20:00.744 00:20:00.744 Firmware Slot Information 00:20:00.744 ========================= 00:20:00.744 Active slot: 1 00:20:00.744 Slot 1 Firmware Revision: 25.01 00:20:00.744 00:20:00.744 00:20:00.744 Commands Supported and Effects 00:20:00.744 ============================== 00:20:00.744 Admin 
Commands 00:20:00.744 -------------- 00:20:00.744 Get Log Page (02h): Supported 00:20:00.744 Identify (06h): Supported 00:20:00.744 Abort (08h): Supported 00:20:00.744 Set Features (09h): Supported 00:20:00.744 Get Features (0Ah): Supported 00:20:00.744 Asynchronous Event Request (0Ch): Supported 00:20:00.744 Keep Alive (18h): Supported 00:20:00.744 I/O Commands 00:20:00.744 ------------ 00:20:00.744 Flush (00h): Supported LBA-Change 00:20:00.744 Write (01h): Supported LBA-Change 00:20:00.744 Read (02h): Supported 00:20:00.744 Compare (05h): Supported 00:20:00.744 Write Zeroes (08h): Supported LBA-Change 00:20:00.744 Dataset Management (09h): Supported LBA-Change 00:20:00.744 Copy (19h): Supported LBA-Change 00:20:00.744 00:20:00.744 Error Log 00:20:00.744 ========= 00:20:00.744 00:20:00.744 Arbitration 00:20:00.744 =========== 00:20:00.744 Arbitration Burst: 1 00:20:00.744 00:20:00.744 Power Management 00:20:00.744 ================ 00:20:00.744 Number of Power States: 1 00:20:00.744 Current Power State: Power State #0 00:20:00.744 Power State #0: 00:20:00.744 Max Power: 0.00 W 00:20:00.744 Non-Operational State: Operational 00:20:00.744 Entry Latency: Not Reported 00:20:00.744 Exit Latency: Not Reported 00:20:00.744 Relative Read Throughput: 0 00:20:00.744 Relative Read Latency: 0 00:20:00.744 Relative Write Throughput: 0 00:20:00.744 Relative Write Latency: 0 00:20:00.744 Idle Power: Not Reported 00:20:00.744 Active Power: Not Reported 00:20:00.744 Non-Operational Permissive Mode: Not Supported 00:20:00.744 00:20:00.744 Health Information 00:20:00.744 ================== 00:20:00.744 Critical Warnings: 00:20:00.744 Available Spare Space: OK 00:20:00.745 Temperature: OK 00:20:00.745 Device Reliability: OK 00:20:00.745 Read Only: No 00:20:00.745 Volatile Memory Backup: OK 00:20:00.745 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:00.745 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:00.745 Available Spare: 0% 00:20:00.745 Available Sp[2024-11-25 14:18:05.736305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:00.745 [2024-11-25 14:18:05.736314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:00.745 [2024-11-25 14:18:05.736334] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:20:00.745 [2024-11-25 14:18:05.736342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.745 [2024-11-25 14:18:05.736346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.745 [2024-11-25 14:18:05.736352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.745 [2024-11-25 14:18:05.736356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.745 [2024-11-25 14:18:05.738165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:20:00.745 [2024-11-25 14:18:05.738173] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:20:00.745 [2024-11-25 14:18:05.738634] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:00.745 [2024-11-25 14:18:05.738673] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:20:00.745 [2024-11-25 14:18:05.738678] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:20:00.745 [2024-11-25 14:18:05.739639] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:20:00.745 [2024-11-25 14:18:05.739647] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:20:00.745 [2024-11-25 14:18:05.739698] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:20:00.745 [2024-11-25 14:18:05.740658] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:00.745 are Threshold: 0% 00:20:00.745 Life Percentage Used: 0% 00:20:00.745 Data Units Read: 0 00:20:00.745 Data Units Written: 0 00:20:00.745 Host Read Commands: 0 00:20:00.745 Host Write Commands: 0 00:20:00.745 Controller Busy Time: 0 minutes 00:20:00.745 Power Cycles: 0 00:20:00.745 Power On Hours: 0 hours 00:20:00.745 Unsafe Shutdowns: 0 00:20:00.745 Unrecoverable Media Errors: 0 00:20:00.745 Lifetime Error Log Entries: 0 00:20:00.745 Warning Temperature Time: 0 minutes 00:20:00.745 Critical Temperature Time: 0 minutes 00:20:00.745 00:20:00.745 Number of Queues 00:20:00.745 ================ 00:20:00.745 Number of I/O Submission Queues: 127 00:20:00.745 Number of I/O Completion Queues: 127 00:20:00.745 00:20:00.745 Active Namespaces 00:20:00.745 ================= 00:20:00.745 Namespace ID:1 00:20:00.745 Error Recovery Timeout: Unlimited 00:20:00.745 Command Set Identifier: NVM (00h) 00:20:00.745 Deallocate: Supported 00:20:00.745 Deallocated/Unwritten Error: Not Supported 00:20:00.745 Deallocated Read Value: Unknown 00:20:00.745 Deallocate in Write Zeroes: Not Supported 00:20:00.745 Deallocated Guard Field: 0xFFFF 00:20:00.745 Flush: Supported 00:20:00.745 Reservation: Supported 00:20:00.745 Namespace Sharing Capabilities: Multiple Controllers 00:20:00.745 Size (in LBAs): 131072 (0GiB) 00:20:00.745 Capacity (in LBAs): 131072 (0GiB) 00:20:00.745 Utilization (in LBAs): 131072 (0GiB) 00:20:00.745 NGUID: 6C1C264050184753B7E86DEF2B1BCA65 00:20:00.745 UUID: 6c1c2640-5018-4753-b7e8-6def2b1bca65 00:20:00.745 Thin Provisioning: Not Supported 00:20:00.745 Per-NS Atomic Units: Yes 00:20:00.745 Atomic Boundary Size (Normal): 0 00:20:00.745 Atomic Boundary Size (PFail): 0 00:20:00.745 Atomic Boundary Offset: 0 00:20:00.745 Maximum Single Source Range Length: 65535 00:20:00.745 Maximum Copy Length: 65535 00:20:00.745 Maximum Source Range Count: 1 00:20:00.745 NGUID/EUI64 Never Reused: No 00:20:00.745 Namespace Write Protected: No 00:20:00.745 Number of LBA Formats: 1 00:20:00.745 Current LBA Format: LBA Format #00 00:20:00.745 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:00.745 00:20:00.745 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
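The spdk_nvme_perf invocation above is the first of the per-device benchmark passes: it attaches to the vfio-user endpoint as an NVMe-oF host and drives 4 KiB sequential reads at queue depth 128 from a single core for five seconds. As a standalone sketch, the same benchmark can be looped over both endpoints created earlier in this run; the flags are taken verbatim from the log (-q 128 queue depth, -o 4096 I/O size in bytes, -w read workload, -t 5 seconds, -c 0x2 core mask, -r the transport ID string), while the SPDK variable and the for-loop are illustrative additions:

    #!/usr/bin/env bash
    # Repeat the 4 KiB read benchmark against each vfio-user endpoint.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for i in 1 2; do
      "$SPDK/build/bin/spdk_nvme_perf" \
        -r "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user$i/$i subnqn:nqn.2019-07.io.spdk:cnode$i" \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    done

The read pass that follows reports IOPS and latency in the Latency(us) table; step @85 then repeats it with -w write.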
00:20:01.005 [2024-11-25 14:18:05.929850] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:20:06.282 Initializing NVMe Controllers
00:20:06.282 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:20:06.282 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:20:06.282 Initialization complete. Launching workers.
00:20:06.282 ========================================================
00:20:06.282 Latency(us)
00:20:06.282 Device Information : IOPS MiB/s Average min max
00:20:06.282 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39884.97 155.80 3209.10 844.45 10773.02
00:20:06.282 ========================================================
00:20:06.282 Total : 39884.97 155.80 3209.10 844.45 10773.02
00:20:06.282
00:20:06.282 [2024-11-25 14:18:10.952948] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:20:06.282 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:20:06.282 [2024-11-25 14:18:11.142816] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:20:11.573 Initializing NVMe Controllers
00:20:11.573 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:20:11.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:20:11.573 Initialization complete. Launching workers.
00:20:11.573 ========================================================
00:20:11.573 Latency(us)
00:20:11.573 Device Information : IOPS MiB/s Average min max
00:20:11.573 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7972.75 4986.91 9977.42
00:20:11.573 ========================================================
00:20:11.573 Total : 16076.80 62.80 7972.75 4986.91 9977.42
00:20:11.573
00:20:11.573 [2024-11-25 14:18:16.180396] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:20:11.573 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:20:11.573 [2024-11-25 14:18:16.384254] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:20:16.863 [2024-11-25 14:18:21.453356] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:20:16.863 Initializing NVMe Controllers
00:20:16.863 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:20:16.863 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:20:16.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:20:16.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:20:16.863 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:20:16.863 Initialization complete. Launching workers.
00:20:16.863 Starting thread on core 2
00:20:16.863 Starting thread on core 3
00:20:16.863 Starting thread on core 1
00:20:16.863 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:20:16.863 [2024-11-25 14:18:21.701522] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:20:20.165 [2024-11-25 14:18:24.765745] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:20:20.165 Initializing NVMe Controllers
00:20:20.165 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:20:20.165 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:20:20.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 0
00:20:20.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 1
00:20:20.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 2
00:20:20.165 Associating SPDK bdev Controller (SPDK1 ) with lcore 3
00:20:20.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:20:20.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:20:20.165 Initialization complete. Launching workers.
00:20:20.165 Starting thread on core 1 with urgent priority queue
00:20:20.165 Starting thread on core 2 with urgent priority queue
00:20:20.165 Starting thread on core 3 with urgent priority queue
00:20:20.165 Starting thread on core 0 with urgent priority queue
00:20:20.165 SPDK bdev Controller (SPDK1 ) core 0: 7878.00 IO/s 12.69 secs/100000 ios
00:20:20.165 SPDK bdev Controller (SPDK1 ) core 1: 13626.33 IO/s 7.34 secs/100000 ios
00:20:20.165 SPDK bdev Controller (SPDK1 ) core 2: 8522.00 IO/s 11.73 secs/100000 ios
00:20:20.165 SPDK bdev Controller (SPDK1 ) core 3: 13521.33 IO/s 7.40 secs/100000 ios
00:20:20.165 ========================================================
00:20:20.165
00:20:20.165 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:20:20.165 [2024-11-25 14:18:25.005344] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:20:20.165 Initializing NVMe Controllers
00:20:20.165 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:20:20.165 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:20:20.165 Namespace ID: 1 size: 0GB
00:20:20.165 Initialization complete.
00:20:20.165 INFO: using host memory buffer for IO
00:20:20.165 Hello world!
00:20:20.165 [2024-11-25 14:18:25.039544] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:20:20.165 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:20:20.425 [2024-11-25 14:18:25.275611] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:20:21.366 Initializing NVMe Controllers
00:20:21.366 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:20:21.366 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:20:21.366 Initialization complete. Launching workers.
00:20:21.366 submit (in ns) avg, min, max = 4717.7, 2818.3, 3998228.3 00:20:21.366 complete (in ns) avg, min, max = 18376.4, 1634.2, 3998560.0 00:20:21.366 00:20:21.366 Submit histogram 00:20:21.366 ================ 00:20:21.366 Range in us Cumulative Count 00:20:21.366 2.813 - 2.827: 0.3362% ( 69) 00:20:21.366 2.827 - 2.840: 1.4619% ( 231) 00:20:21.366 2.840 - 2.853: 4.4881% ( 621) 00:20:21.366 2.853 - 2.867: 9.4586% ( 1020) 00:20:21.366 2.867 - 2.880: 14.3122% ( 996) 00:20:21.366 2.880 - 2.893: 19.6530% ( 1096) 00:20:21.366 2.893 - 2.907: 25.6128% ( 1223) 00:20:21.366 2.907 - 2.920: 30.8270% ( 1070) 00:20:21.366 2.920 - 2.933: 35.3297% ( 924) 00:20:21.366 2.933 - 2.947: 41.2309% ( 1211) 00:20:21.366 2.947 - 2.960: 47.1176% ( 1208) 00:20:21.366 2.960 - 2.973: 54.0227% ( 1417) 00:20:21.366 2.973 - 2.987: 61.9902% ( 1635) 00:20:21.366 2.987 - 3.000: 70.5278% ( 1752) 00:20:21.366 3.000 - 3.013: 78.9143% ( 1721) 00:20:21.366 3.013 - 3.027: 85.7902% ( 1411) 00:20:21.366 3.027 - 3.040: 91.4868% ( 1169) 00:20:21.366 3.040 - 3.053: 95.3511% ( 793) 00:20:21.366 3.053 - 3.067: 97.5440% ( 450) 00:20:21.366 3.067 - 3.080: 98.5576% ( 208) 00:20:21.366 3.080 - 3.093: 99.0156% ( 94) 00:20:21.366 3.093 - 3.107: 99.3226% ( 63) 00:20:21.366 3.107 - 3.120: 99.4981% ( 36) 00:20:21.366 3.120 - 3.133: 99.5614% ( 13) 00:20:21.366 3.133 - 3.147: 99.6150% ( 11) 00:20:21.366 3.147 - 3.160: 99.6248% ( 2) 00:20:21.366 3.267 - 3.280: 99.6296% ( 1) 00:20:21.366 3.467 - 3.493: 99.6345% ( 1) 00:20:21.366 3.520 - 3.547: 99.6394% ( 1) 00:20:21.366 3.547 - 3.573: 99.6443% ( 1) 00:20:21.366 3.600 - 3.627: 99.6491% ( 1) 00:20:21.366 3.627 - 3.653: 99.6540% ( 1) 00:20:21.366 3.680 - 3.707: 99.6638% ( 2) 00:20:21.366 3.867 - 3.893: 99.6686% ( 1) 00:20:21.366 3.920 - 3.947: 99.6735% ( 1) 00:20:21.366 3.947 - 3.973: 99.6784% ( 1) 00:20:21.366 4.160 - 4.187: 99.6833% ( 1) 00:20:21.366 4.293 - 4.320: 99.6881% ( 1) 00:20:21.366 4.347 - 4.373: 99.6930% ( 1) 00:20:21.366 4.507 - 4.533: 99.6979% ( 1) 00:20:21.366 4.533 - 4.560: 99.7076% ( 2) 00:20:21.366 4.560 - 4.587: 99.7125% ( 1) 00:20:21.366 4.667 - 4.693: 99.7222% ( 2) 00:20:21.366 4.773 - 4.800: 99.7271% ( 1) 00:20:21.366 4.880 - 4.907: 99.7320% ( 1) 00:20:21.366 4.933 - 4.960: 99.7369% ( 1) 00:20:21.366 4.960 - 4.987: 99.7417% ( 1) 00:20:21.366 5.013 - 5.040: 99.7466% ( 1) 00:20:21.366 5.040 - 5.067: 99.7515% ( 1) 00:20:21.366 5.093 - 5.120: 99.7563% ( 1) 00:20:21.366 5.120 - 5.147: 99.7661% ( 2) 00:20:21.366 5.173 - 5.200: 99.7710% ( 1) 00:20:21.366 5.200 - 5.227: 99.7856% ( 3) 00:20:21.366 5.307 - 5.333: 99.8002% ( 3) 00:20:21.366 5.440 - 5.467: 99.8051% ( 1) 00:20:21.366 5.467 - 5.493: 99.8148% ( 2) 00:20:21.366 5.547 - 5.573: 99.8246% ( 2) 00:20:21.366 5.573 - 5.600: 99.8294% ( 1) 00:20:21.366 5.627 - 5.653: 99.8343% ( 1) 00:20:21.366 5.680 - 5.707: 99.8392% ( 1) 00:20:21.366 5.813 - 5.840: 99.8489% ( 2) 00:20:21.366 5.840 - 5.867: 99.8538% ( 1) 00:20:21.366 6.027 - 6.053: 99.8684% ( 3) 00:20:21.366 6.107 - 6.133: 99.8733% ( 1) 00:20:21.366 6.133 - 6.160: 99.8782% ( 1) 00:20:21.366 6.213 - 6.240: 99.8928% ( 3) 00:20:21.366 6.240 - 6.267: 99.8977% ( 1) 00:20:21.366 6.320 - 6.347: 99.9025% ( 1) 00:20:21.366 6.373 - 6.400: 99.9074% ( 1) 00:20:21.366 6.427 - 6.453: 99.9123% ( 1) 00:20:21.366 6.453 - 6.480: 99.9172% ( 1) 00:20:21.366 6.533 - 6.560: 99.9220% ( 1) 00:20:21.366 6.827 - 6.880: 99.9318% ( 2) 00:20:21.366 6.880 - 6.933: 99.9367% ( 1) 00:20:21.366 6.933 - 6.987: 99.9415% ( 1) 00:20:21.366 11.680 - 11.733: 99.9464% ( 1) 00:20:21.366 11.840 - 11.893: 99.9513% ( 1) 
00:20:21.366 12.000 - 12.053: 99.9561% ( 1) 00:20:21.366 3986.773 - 4014.080: 100.0000% ( 9) 00:20:21.366 00:20:21.366 [2024-11-25 14:18:26.294259] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:21.366 Complete histogram 00:20:21.366 ================== 00:20:21.366 Range in us Cumulative Count 00:20:21.366 1.633 - 1.640: 0.0097% ( 2) 00:20:21.366 1.640 - 1.647: 0.1998% ( 39) 00:20:21.366 1.647 - 1.653: 1.1159% ( 188) 00:20:21.366 1.653 - 1.660: 1.1988% ( 17) 00:20:21.366 1.660 - 1.667: 1.2962% ( 20) 00:20:21.366 1.667 - 1.673: 1.4522% ( 32) 00:20:21.366 1.673 - 1.680: 1.5106% ( 12) 00:20:21.366 1.680 - 1.687: 1.5448% ( 7) 00:20:21.366 1.687 - 1.693: 1.5691% ( 5) 00:20:21.366 1.693 - 1.700: 1.6958% ( 26) 00:20:21.366 1.700 - 1.707: 21.5340% ( 4071) 00:20:21.366 1.707 - 1.720: 53.2333% ( 6505) 00:20:21.366 1.720 - 1.733: 72.3015% ( 3913) 00:20:21.366 1.733 - 1.747: 81.6773% ( 1924) 00:20:21.366 1.747 - 1.760: 83.6070% ( 396) 00:20:21.366 1.760 - 1.773: 87.1644% ( 730) 00:20:21.366 1.773 - 1.787: 92.6222% ( 1120) 00:20:21.366 1.787 - 1.800: 96.4865% ( 793) 00:20:21.366 1.800 - 1.813: 98.5868% ( 431) 00:20:21.366 1.813 - 1.827: 99.2788% ( 142) 00:20:21.366 1.827 - 1.840: 99.4396% ( 33) 00:20:21.366 1.840 - 1.853: 99.4542% ( 3) 00:20:21.366 3.520 - 3.547: 99.4591% ( 1) 00:20:21.366 3.653 - 3.680: 99.4640% ( 1) 00:20:21.366 3.867 - 3.893: 99.4688% ( 1) 00:20:21.366 3.893 - 3.920: 99.4737% ( 1) 00:20:21.366 4.133 - 4.160: 99.4786% ( 1) 00:20:21.366 4.213 - 4.240: 99.4835% ( 1) 00:20:21.366 4.267 - 4.293: 99.4883% ( 1) 00:20:21.366 4.293 - 4.320: 99.4932% ( 1) 00:20:21.366 4.453 - 4.480: 99.4981% ( 1) 00:20:21.366 4.480 - 4.507: 99.5029% ( 1) 00:20:21.366 4.533 - 4.560: 99.5078% ( 1) 00:20:21.366 4.587 - 4.613: 99.5127% ( 1) 00:20:21.366 4.747 - 4.773: 99.5273% ( 3) 00:20:21.366 4.853 - 4.880: 99.5322% ( 1) 00:20:21.366 5.013 - 5.040: 99.5371% ( 1) 00:20:21.366 5.147 - 5.173: 99.5419% ( 1) 00:20:21.366 5.173 - 5.200: 99.5468% ( 1) 00:20:21.366 5.227 - 5.253: 99.5517% ( 1) 00:20:21.366 5.600 - 5.627: 99.5566% ( 1) 00:20:21.366 5.760 - 5.787: 99.5614% ( 1) 00:20:21.366 10.400 - 10.453: 99.5663% ( 1) 00:20:21.366 10.667 - 10.720: 99.5712% ( 1) 00:20:21.367 11.307 - 11.360: 99.5760% ( 1) 00:20:21.367 11.467 - 11.520: 99.5809% ( 1) 00:20:21.367 2048.000 - 2061.653: 99.5858% ( 1) 00:20:21.367 3986.773 - 4014.080: 100.0000% ( 85) 00:20:21.367 00:20:21.367 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:20:21.367 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:21.367 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:20:21.367 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:20:21.367 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:21.627 [ 00:20:21.627 { 00:20:21.627 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:21.627 "subtype": "Discovery", 00:20:21.627 "listen_addresses": [], 00:20:21.627 "allow_any_host": true, 00:20:21.627 "hosts": [] 00:20:21.627 }, 00:20:21.627 { 00:20:21.627 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:21.627 "subtype": "NVMe", 
00:20:21.627 "listen_addresses": [ 00:20:21.627 { 00:20:21.627 "trtype": "VFIOUSER", 00:20:21.627 "adrfam": "IPv4", 00:20:21.627 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:21.627 "trsvcid": "0" 00:20:21.627 } 00:20:21.627 ], 00:20:21.627 "allow_any_host": true, 00:20:21.627 "hosts": [], 00:20:21.627 "serial_number": "SPDK1", 00:20:21.627 "model_number": "SPDK bdev Controller", 00:20:21.627 "max_namespaces": 32, 00:20:21.627 "min_cntlid": 1, 00:20:21.627 "max_cntlid": 65519, 00:20:21.627 "namespaces": [ 00:20:21.627 { 00:20:21.627 "nsid": 1, 00:20:21.627 "bdev_name": "Malloc1", 00:20:21.627 "name": "Malloc1", 00:20:21.627 "nguid": "6C1C264050184753B7E86DEF2B1BCA65", 00:20:21.627 "uuid": "6c1c2640-5018-4753-b7e8-6def2b1bca65" 00:20:21.627 } 00:20:21.627 ] 00:20:21.627 }, 00:20:21.627 { 00:20:21.627 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:21.627 "subtype": "NVMe", 00:20:21.627 "listen_addresses": [ 00:20:21.627 { 00:20:21.627 "trtype": "VFIOUSER", 00:20:21.627 "adrfam": "IPv4", 00:20:21.627 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:21.627 "trsvcid": "0" 00:20:21.627 } 00:20:21.627 ], 00:20:21.627 "allow_any_host": true, 00:20:21.627 "hosts": [], 00:20:21.627 "serial_number": "SPDK2", 00:20:21.627 "model_number": "SPDK bdev Controller", 00:20:21.627 "max_namespaces": 32, 00:20:21.627 "min_cntlid": 1, 00:20:21.627 "max_cntlid": 65519, 00:20:21.627 "namespaces": [ 00:20:21.627 { 00:20:21.627 "nsid": 1, 00:20:21.627 "bdev_name": "Malloc2", 00:20:21.627 "name": "Malloc2", 00:20:21.627 "nguid": "0C0FC25FC01A4C35922F9B18FC981BEF", 00:20:21.627 "uuid": "0c0fc25f-c01a-4c35-922f-9b18fc981bef" 00:20:21.627 } 00:20:21.627 ] 00:20:21.627 } 00:20:21.627 ] 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3389575 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:20:21.627 [2024-11-25 14:18:26.675519] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:21.627 Malloc3 00:20:21.627 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:20:21.887 [2024-11-25 14:18:26.853743] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:21.887 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:21.887 Asynchronous Event Request test 00:20:21.887 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:21.887 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:21.887 Registering asynchronous event callbacks... 00:20:21.887 Starting namespace attribute notice tests for all controllers... 00:20:21.887 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:21.887 aer_cb - Changed Namespace 00:20:21.887 Cleaning up... 00:20:22.149 [ 00:20:22.149 { 00:20:22.149 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.149 "subtype": "Discovery", 00:20:22.149 "listen_addresses": [], 00:20:22.149 "allow_any_host": true, 00:20:22.149 "hosts": [] 00:20:22.149 }, 00:20:22.149 { 00:20:22.149 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:22.149 "subtype": "NVMe", 00:20:22.149 "listen_addresses": [ 00:20:22.149 { 00:20:22.149 "trtype": "VFIOUSER", 00:20:22.149 "adrfam": "IPv4", 00:20:22.149 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:22.149 "trsvcid": "0" 00:20:22.149 } 00:20:22.149 ], 00:20:22.149 "allow_any_host": true, 00:20:22.149 "hosts": [], 00:20:22.149 "serial_number": "SPDK1", 00:20:22.149 "model_number": "SPDK bdev Controller", 00:20:22.149 "max_namespaces": 32, 00:20:22.149 "min_cntlid": 1, 00:20:22.149 "max_cntlid": 65519, 00:20:22.149 "namespaces": [ 00:20:22.149 { 00:20:22.149 "nsid": 1, 00:20:22.149 "bdev_name": "Malloc1", 00:20:22.149 "name": "Malloc1", 00:20:22.149 "nguid": "6C1C264050184753B7E86DEF2B1BCA65", 00:20:22.149 "uuid": "6c1c2640-5018-4753-b7e8-6def2b1bca65" 00:20:22.149 }, 00:20:22.149 { 00:20:22.149 "nsid": 2, 00:20:22.149 "bdev_name": "Malloc3", 00:20:22.149 "name": "Malloc3", 00:20:22.149 "nguid": "106C1FFE25864AC493A3B980CE01FD0A", 00:20:22.149 "uuid": "106c1ffe-2586-4ac4-93a3-b980ce01fd0a" 00:20:22.149 } 00:20:22.149 ] 00:20:22.149 }, 00:20:22.149 { 00:20:22.149 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:22.149 "subtype": "NVMe", 00:20:22.149 "listen_addresses": [ 00:20:22.149 { 00:20:22.149 "trtype": "VFIOUSER", 00:20:22.149 "adrfam": "IPv4", 00:20:22.149 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:22.149 "trsvcid": "0" 00:20:22.149 } 00:20:22.149 ], 00:20:22.149 "allow_any_host": true, 00:20:22.149 "hosts": [], 00:20:22.149 "serial_number": "SPDK2", 00:20:22.149 "model_number": "SPDK bdev 
Controller", 00:20:22.149 "max_namespaces": 32, 00:20:22.149 "min_cntlid": 1, 00:20:22.149 "max_cntlid": 65519, 00:20:22.149 "namespaces": [ 00:20:22.149 { 00:20:22.149 "nsid": 1, 00:20:22.149 "bdev_name": "Malloc2", 00:20:22.149 "name": "Malloc2", 00:20:22.149 "nguid": "0C0FC25FC01A4C35922F9B18FC981BEF", 00:20:22.149 "uuid": "0c0fc25f-c01a-4c35-922f-9b18fc981bef" 00:20:22.149 } 00:20:22.149 ] 00:20:22.149 } 00:20:22.149 ] 00:20:22.149 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3389575 00:20:22.149 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:22.149 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:22.149 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:20:22.149 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:22.149 [2024-11-25 14:18:27.074290] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:20:22.149 [2024-11-25 14:18:27.074336] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389594 ] 00:20:22.149 [2024-11-25 14:18:27.111929] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:20:22.149 [2024-11-25 14:18:27.120376] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:22.149 [2024-11-25 14:18:27.120395] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5d29083000 00:20:22.149 [2024-11-25 14:18:27.121373] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.150 [2024-11-25 14:18:27.122380] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.150 [2024-11-25 14:18:27.123383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.150 [2024-11-25 14:18:27.124389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:22.150 [2024-11-25 14:18:27.125394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:22.150 [2024-11-25 14:18:27.126396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:22.150 [2024-11-25 14:18:27.127406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:22.150 [2024-11-25 14:18:27.128417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:20:22.150 [2024-11-25 14:18:27.129420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:22.150 [2024-11-25 14:18:27.129428] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5d29078000 00:20:22.150 [2024-11-25 14:18:27.130338] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:22.150 [2024-11-25 14:18:27.144434] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:20:22.150 [2024-11-25 14:18:27.144452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:20:22.150 [2024-11-25 14:18:27.146510] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:22.150 [2024-11-25 14:18:27.146544] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:22.150 [2024-11-25 14:18:27.146603] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:20:22.150 [2024-11-25 14:18:27.146612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:20:22.150 [2024-11-25 14:18:27.146617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:20:22.150 [2024-11-25 14:18:27.147509] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:20:22.150 [2024-11-25 14:18:27.147516] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:20:22.150 [2024-11-25 14:18:27.147523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:20:22.150 [2024-11-25 14:18:27.148516] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:22.150 [2024-11-25 14:18:27.148523] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:20:22.150 [2024-11-25 14:18:27.148528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:22.150 [2024-11-25 14:18:27.149518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:20:22.150 [2024-11-25 14:18:27.149524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:22.150 [2024-11-25 14:18:27.150530] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:20:22.150 [2024-11-25 14:18:27.150536] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
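Decoded against the standard NVMe register map (which the vfio-user transport emulates over a UNIX socket instead of a PCI BAR), this bring-up is the usual controller handshake: offset 0x8 is VS (0x10300 = NVMe 1.3), 0x0 is CAP, 0x14 is CC and 0x1c is CSTS, so the host has just confirmed the controller is disabled (CC.EN = 0, CSTS.RDY = 0). In the records below it programs the admin queue through AQA/ASQ/ACQ (offsets 0x24/0x28/0x30), writes CC = 0x460001 (EN=1, IOSQES=6 for 64-byte submission entries, IOCQES=4 for 16-byte completion entries), polls CSTS until RDY = 1, and only then issues IDENTIFY (cdw10:00000001, i.e. CNS 01h, Identify Controller) through a single 4 KiB PRP. The same trace can be reproduced by hand with the identify tool and the debug log flags used in this run; a minimal sketch, assuming the same build tree and a live endpoint:

    #!/usr/bin/env bash
    # Re-run the identify pass with NVMe/vfio-user debug logging enabled,
    # exactly as invoked by nvmf_vfio_user.sh@83 above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_nvme_identify" \
      -r "trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2" \
      -g -L nvme -L nvme_vfio -L vfio_pci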
00:20:22.150 [2024-11-25 14:18:27.150540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:22.150 [2024-11-25 14:18:27.150544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:22.150 [2024-11-25 14:18:27.150650] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:20:22.150 [2024-11-25 14:18:27.150653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:22.150 [2024-11-25 14:18:27.150657] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:20:22.150 [2024-11-25 14:18:27.151535] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:20:22.150 [2024-11-25 14:18:27.152544] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:20:22.150 [2024-11-25 14:18:27.153553] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:22.150 [2024-11-25 14:18:27.154558] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:22.150 [2024-11-25 14:18:27.154588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:22.150 [2024-11-25 14:18:27.155568] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:20:22.150 [2024-11-25 14:18:27.155575] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:22.150 [2024-11-25 14:18:27.155578] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.155593] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:20:22.150 [2024-11-25 14:18:27.155598] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.155609] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:22.150 [2024-11-25 14:18:27.155614] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:22.150 [2024-11-25 14:18:27.155616] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.150 [2024-11-25 14:18:27.155625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:22.150 [2024-11-25 14:18:27.166164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:22.150 
[2024-11-25 14:18:27.166173] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:20:22.150 [2024-11-25 14:18:27.166178] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:20:22.150 [2024-11-25 14:18:27.166181] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:20:22.150 [2024-11-25 14:18:27.166184] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:22.150 [2024-11-25 14:18:27.166188] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:20:22.150 [2024-11-25 14:18:27.166191] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:20:22.150 [2024-11-25 14:18:27.166194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.166200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.166207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:22.150 [2024-11-25 14:18:27.174163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:22.150 [2024-11-25 14:18:27.174171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.150 [2024-11-25 14:18:27.174178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.150 [2024-11-25 14:18:27.174184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.150 [2024-11-25 14:18:27.174190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.150 [2024-11-25 14:18:27.174193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.174200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.174206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:22.150 [2024-11-25 14:18:27.182163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:22.150 [2024-11-25 14:18:27.182169] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:20:22.150 [2024-11-25 14:18:27.182173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:20:22.150 [2024-11-25 14:18:27.182178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.182183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.182190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:22.150 [2024-11-25 14:18:27.190163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:22.150 [2024-11-25 14:18:27.190207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.190213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:22.150 [2024-11-25 14:18:27.190219] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:22.150 [2024-11-25 14:18:27.190222] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:22.150 [2024-11-25 14:18:27.190224] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.150 [2024-11-25 14:18:27.190229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:22.150 [2024-11-25 14:18:27.198163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:22.150 [2024-11-25 14:18:27.198170] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:20:22.151 [2024-11-25 14:18:27.198183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.198188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.198193] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:22.151 [2024-11-25 14:18:27.198196] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:22.151 [2024-11-25 14:18:27.198199] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.151 [2024-11-25 14:18:27.198203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:22.151 [2024-11-25 14:18:27.206164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:22.151 [2024-11-25 14:18:27.206174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.206180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.206185] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:22.151 [2024-11-25 14:18:27.206188] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:22.151 [2024-11-25 14:18:27.206190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.151 [2024-11-25 14:18:27.206195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:22.151 [2024-11-25 14:18:27.214162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:22.151 [2024-11-25 14:18:27.214169] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.214173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.214182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.214186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.214190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.214193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.214197] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:22.151 [2024-11-25 14:18:27.214200] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:20:22.151 [2024-11-25 14:18:27.214203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:20:22.151 [2024-11-25 14:18:27.214217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:22.151 [2024-11-25 14:18:27.222162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:22.151 [2024-11-25 14:18:27.222172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:22.151 [2024-11-25 14:18:27.230163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:22.151 [2024-11-25 14:18:27.230173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:22.412 [2024-11-25 14:18:27.238163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
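The queue negotiation in these records is worth decoding: the SET FEATURES NUMBER OF QUEUES completion above and the GET FEATURES readback just below both carry cdw0:7e007e. The two 16-bit halves are zero-based counts of the I/O submission and completion queues granted, so 0x7e in each half means 127 of each, which is exactly the "Number of I/O Submission Queues: 127" reported in the identify output further down. A one-line shell check of that arithmetic (no assumptions beyond the value taken from the log):

    # Low half-word = I/O submission queues granted, high half-word =
    # completion queues; both fields are 0-based, hence the +1.
    printf 'NSQA=%d NCQA=%d\n' $(( (0x7e007e & 0xffff) + 1 )) $(( (0x7e007e >> 16) + 1 ))
    # prints: NSQA=127 NCQA=127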
00:20:22.412 [2024-11-25 14:18:27.238174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:22.412 [2024-11-25 14:18:27.246163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:22.412 [2024-11-25 14:18:27.246174] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:22.412 [2024-11-25 14:18:27.246178] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:22.412 [2024-11-25 14:18:27.246180] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:22.412 [2024-11-25 14:18:27.246183] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:22.412 [2024-11-25 14:18:27.246185] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:22.412 [2024-11-25 14:18:27.246190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:22.412 [2024-11-25 14:18:27.246195] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:22.412 [2024-11-25 14:18:27.246198] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:22.412 [2024-11-25 14:18:27.246201] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.412 [2024-11-25 14:18:27.246205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:22.412 [2024-11-25 14:18:27.246210] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:22.412 [2024-11-25 14:18:27.246215] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:22.412 [2024-11-25 14:18:27.246217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.412 [2024-11-25 14:18:27.246221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:22.412 [2024-11-25 14:18:27.246227] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:22.412 [2024-11-25 14:18:27.246230] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:22.412 [2024-11-25 14:18:27.246232] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:22.412 [2024-11-25 14:18:27.246236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:22.412 [2024-11-25 14:18:27.254162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:22.412 [2024-11-25 14:18:27.254172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:22.412 [2024-11-25 14:18:27.254180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:22.412 
[2024-11-25 14:18:27.254185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:22.412 ===================================================== 00:20:22.412 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:22.412 ===================================================== 00:20:22.412 Controller Capabilities/Features 00:20:22.412 ================================ 00:20:22.412 Vendor ID: 4e58 00:20:22.412 Subsystem Vendor ID: 4e58 00:20:22.412 Serial Number: SPDK2 00:20:22.412 Model Number: SPDK bdev Controller 00:20:22.412 Firmware Version: 25.01 00:20:22.412 Recommended Arb Burst: 6 00:20:22.412 IEEE OUI Identifier: 8d 6b 50 00:20:22.412 Multi-path I/O 00:20:22.412 May have multiple subsystem ports: Yes 00:20:22.412 May have multiple controllers: Yes 00:20:22.412 Associated with SR-IOV VF: No 00:20:22.412 Max Data Transfer Size: 131072 00:20:22.412 Max Number of Namespaces: 32 00:20:22.412 Max Number of I/O Queues: 127 00:20:22.412 NVMe Specification Version (VS): 1.3 00:20:22.412 NVMe Specification Version (Identify): 1.3 00:20:22.412 Maximum Queue Entries: 256 00:20:22.412 Contiguous Queues Required: Yes 00:20:22.412 Arbitration Mechanisms Supported 00:20:22.412 Weighted Round Robin: Not Supported 00:20:22.412 Vendor Specific: Not Supported 00:20:22.412 Reset Timeout: 15000 ms 00:20:22.412 Doorbell Stride: 4 bytes 00:20:22.412 NVM Subsystem Reset: Not Supported 00:20:22.412 Command Sets Supported 00:20:22.412 NVM Command Set: Supported 00:20:22.412 Boot Partition: Not Supported 00:20:22.412 Memory Page Size Minimum: 4096 bytes 00:20:22.412 Memory Page Size Maximum: 4096 bytes 00:20:22.412 Persistent Memory Region: Not Supported 00:20:22.412 Optional Asynchronous Events Supported 00:20:22.412 Namespace Attribute Notices: Supported 00:20:22.412 Firmware Activation Notices: Not Supported 00:20:22.412 ANA Change Notices: Not Supported 00:20:22.412 PLE Aggregate Log Change Notices: Not Supported 00:20:22.412 LBA Status Info Alert Notices: Not Supported 00:20:22.412 EGE Aggregate Log Change Notices: Not Supported 00:20:22.412 Normal NVM Subsystem Shutdown event: Not Supported 00:20:22.412 Zone Descriptor Change Notices: Not Supported 00:20:22.412 Discovery Log Change Notices: Not Supported 00:20:22.412 Controller Attributes 00:20:22.412 128-bit Host Identifier: Supported 00:20:22.412 Non-Operational Permissive Mode: Not Supported 00:20:22.412 NVM Sets: Not Supported 00:20:22.412 Read Recovery Levels: Not Supported 00:20:22.412 Endurance Groups: Not Supported 00:20:22.412 Predictable Latency Mode: Not Supported 00:20:22.412 Traffic Based Keep ALive: Not Supported 00:20:22.412 Namespace Granularity: Not Supported 00:20:22.412 SQ Associations: Not Supported 00:20:22.412 UUID List: Not Supported 00:20:22.412 Multi-Domain Subsystem: Not Supported 00:20:22.412 Fixed Capacity Management: Not Supported 00:20:22.412 Variable Capacity Management: Not Supported 00:20:22.412 Delete Endurance Group: Not Supported 00:20:22.412 Delete NVM Set: Not Supported 00:20:22.412 Extended LBA Formats Supported: Not Supported 00:20:22.412 Flexible Data Placement Supported: Not Supported 00:20:22.412 00:20:22.412 Controller Memory Buffer Support 00:20:22.412 ================================ 00:20:22.412 Supported: No 00:20:22.412 00:20:22.412 Persistent Memory Region Support 00:20:22.412 ================================ 00:20:22.412 Supported: No 00:20:22.412 00:20:22.412 Admin Command Set Attributes 
00:20:22.412 ============================ 00:20:22.412 Security Send/Receive: Not Supported 00:20:22.412 Format NVM: Not Supported 00:20:22.412 Firmware Activate/Download: Not Supported 00:20:22.412 Namespace Management: Not Supported 00:20:22.412 Device Self-Test: Not Supported 00:20:22.412 Directives: Not Supported 00:20:22.412 NVMe-MI: Not Supported 00:20:22.412 Virtualization Management: Not Supported 00:20:22.412 Doorbell Buffer Config: Not Supported 00:20:22.412 Get LBA Status Capability: Not Supported 00:20:22.412 Command & Feature Lockdown Capability: Not Supported 00:20:22.412 Abort Command Limit: 4 00:20:22.412 Async Event Request Limit: 4 00:20:22.412 Number of Firmware Slots: N/A 00:20:22.412 Firmware Slot 1 Read-Only: N/A 00:20:22.412 Firmware Activation Without Reset: N/A 00:20:22.412 Multiple Update Detection Support: N/A 00:20:22.412 Firmware Update Granularity: No Information Provided 00:20:22.412 Per-Namespace SMART Log: No 00:20:22.412 Asymmetric Namespace Access Log Page: Not Supported 00:20:22.412 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:20:22.412 Command Effects Log Page: Supported 00:20:22.412 Get Log Page Extended Data: Supported 00:20:22.412 Telemetry Log Pages: Not Supported 00:20:22.412 Persistent Event Log Pages: Not Supported 00:20:22.412 Supported Log Pages Log Page: May Support 00:20:22.412 Commands Supported & Effects Log Page: Not Supported 00:20:22.412 Feature Identifiers & Effects Log Page:May Support 00:20:22.412 NVMe-MI Commands & Effects Log Page: May Support 00:20:22.412 Data Area 4 for Telemetry Log: Not Supported 00:20:22.412 Error Log Page Entries Supported: 128 00:20:22.412 Keep Alive: Supported 00:20:22.412 Keep Alive Granularity: 10000 ms 00:20:22.412 00:20:22.413 NVM Command Set Attributes 00:20:22.413 ========================== 00:20:22.413 Submission Queue Entry Size 00:20:22.413 Max: 64 00:20:22.413 Min: 64 00:20:22.413 Completion Queue Entry Size 00:20:22.413 Max: 16 00:20:22.413 Min: 16 00:20:22.413 Number of Namespaces: 32 00:20:22.413 Compare Command: Supported 00:20:22.413 Write Uncorrectable Command: Not Supported 00:20:22.413 Dataset Management Command: Supported 00:20:22.413 Write Zeroes Command: Supported 00:20:22.413 Set Features Save Field: Not Supported 00:20:22.413 Reservations: Not Supported 00:20:22.413 Timestamp: Not Supported 00:20:22.413 Copy: Supported 00:20:22.413 Volatile Write Cache: Present 00:20:22.413 Atomic Write Unit (Normal): 1 00:20:22.413 Atomic Write Unit (PFail): 1 00:20:22.413 Atomic Compare & Write Unit: 1 00:20:22.413 Fused Compare & Write: Supported 00:20:22.413 Scatter-Gather List 00:20:22.413 SGL Command Set: Supported (Dword aligned) 00:20:22.413 SGL Keyed: Not Supported 00:20:22.413 SGL Bit Bucket Descriptor: Not Supported 00:20:22.413 SGL Metadata Pointer: Not Supported 00:20:22.413 Oversized SGL: Not Supported 00:20:22.413 SGL Metadata Address: Not Supported 00:20:22.413 SGL Offset: Not Supported 00:20:22.413 Transport SGL Data Block: Not Supported 00:20:22.413 Replay Protected Memory Block: Not Supported 00:20:22.413 00:20:22.413 Firmware Slot Information 00:20:22.413 ========================= 00:20:22.413 Active slot: 1 00:20:22.413 Slot 1 Firmware Revision: 25.01 00:20:22.413 00:20:22.413 00:20:22.413 Commands Supported and Effects 00:20:22.413 ============================== 00:20:22.413 Admin Commands 00:20:22.413 -------------- 00:20:22.413 Get Log Page (02h): Supported 00:20:22.413 Identify (06h): Supported 00:20:22.413 Abort (08h): Supported 00:20:22.413 Set Features (09h): Supported 
00:20:22.413 Get Features (0Ah): Supported 00:20:22.413 Asynchronous Event Request (0Ch): Supported 00:20:22.413 Keep Alive (18h): Supported 00:20:22.413 I/O Commands 00:20:22.413 ------------ 00:20:22.413 Flush (00h): Supported LBA-Change 00:20:22.413 Write (01h): Supported LBA-Change 00:20:22.413 Read (02h): Supported 00:20:22.413 Compare (05h): Supported 00:20:22.413 Write Zeroes (08h): Supported LBA-Change 00:20:22.413 Dataset Management (09h): Supported LBA-Change 00:20:22.413 Copy (19h): Supported LBA-Change 00:20:22.413 00:20:22.413 Error Log 00:20:22.413 ========= 00:20:22.413 00:20:22.413 Arbitration 00:20:22.413 =========== 00:20:22.413 Arbitration Burst: 1 00:20:22.413 00:20:22.413 Power Management 00:20:22.413 ================ 00:20:22.413 Number of Power States: 1 00:20:22.413 Current Power State: Power State #0 00:20:22.413 Power State #0: 00:20:22.413 Max Power: 0.00 W 00:20:22.413 Non-Operational State: Operational 00:20:22.413 Entry Latency: Not Reported 00:20:22.413 Exit Latency: Not Reported 00:20:22.413 Relative Read Throughput: 0 00:20:22.413 Relative Read Latency: 0 00:20:22.413 Relative Write Throughput: 0 00:20:22.413 Relative Write Latency: 0 00:20:22.413 Idle Power: Not Reported 00:20:22.413 Active Power: Not Reported 00:20:22.413 Non-Operational Permissive Mode: Not Supported 00:20:22.413 00:20:22.413 Health Information 00:20:22.413 ================== 00:20:22.413 Critical Warnings: 00:20:22.413 Available Spare Space: OK 00:20:22.413 Temperature: OK 00:20:22.413 Device Reliability: OK 00:20:22.413 Read Only: No 00:20:22.413 Volatile Memory Backup: OK 00:20:22.413 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:22.413 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:22.413 Available Spare: 0% 00:20:22.413 Available Spare Threshold: 0% 00:20:22.413 Life Percentage Used: 0% 00:20:22.413 Data Units Read: 0 00:20:22.413 Data Units Written: 0 00:20:22.413 Host Read Commands: 0 00:20:22.413 Host Write Commands: 0 00:20:22.413 Controller Busy Time: 0 minutes 00:20:22.413 Power Cycles: 0 00:20:22.413 Power On Hours: 0 hours 00:20:22.413 Unsafe Shutdowns: 0 00:20:22.413 Unrecoverable Media Errors: 0 00:20:22.413 Lifetime Error Log Entries: 0 00:20:22.413 Warning Temperature Time: 0 minutes 00:20:22.413 Critical Temperature Time: 0 minutes 00:20:22.413 00:20:22.413 Number of Queues 00:20:22.413 ================ 00:20:22.413 Number of I/O Submission Queues: 127 00:20:22.413 Number of I/O Completion Queues: 127 00:20:22.413 00:20:22.413 Active Namespaces 00:20:22.413 ================= 00:20:22.413 Namespace ID:1 00:20:22.413 Error Recovery Timeout: Unlimited 00:20:22.413 Command Set Identifier: NVM (00h) 00:20:22.413 Deallocate: Supported 00:20:22.413 Deallocated/Unwritten Error: Not Supported 00:20:22.413 Deallocated Read Value: Unknown 00:20:22.413 Deallocate in Write Zeroes: Not Supported 00:20:22.413 Deallocated Guard Field: 0xFFFF 00:20:22.413 Flush: Supported 00:20:22.413 Reservation: Supported 00:20:22.413 Namespace Sharing Capabilities: Multiple Controllers 00:20:22.413 Size (in LBAs): 131072 (0GiB) 00:20:22.413 Capacity (in LBAs): 131072 (0GiB) 00:20:22.413 Utilization (in LBAs): 131072 (0GiB) 00:20:22.413 NGUID: 0C0FC25FC01A4C35922F9B18FC981BEF 00:20:22.413 UUID: 0c0fc25f-c01a-4c35-922f-9b18fc981bef 00:20:22.413 Thin Provisioning: Not Supported 00:20:22.413 Per-NS Atomic Units: Yes 00:20:22.413 Atomic Boundary Size (Normal): 0 00:20:22.413 Atomic Boundary Size (PFail): 0 00:20:22.413 Atomic Boundary Offset: 0 00:20:22.413 Maximum Single Source Range Length: 65535 00:20:22.413 Maximum Copy Length: 65535 00:20:22.413 Maximum Source Range Count: 1 00:20:22.413 NGUID/EUI64 Never Reused: No 00:20:22.413 Namespace Write Protected: No 00:20:22.413 Number of LBA Formats: 1 00:20:22.413 Current LBA Format: LBA Format #00 00:20:22.413 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:22.413 00:20:22.413
[2024-11-25 14:18:27.254258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:22.413 [2024-11-25 14:18:27.262163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:22.413 [2024-11-25 14:18:27.262187] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:20:22.413 [2024-11-25 14:18:27.262194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.413 [2024-11-25 14:18:27.262199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.413 [2024-11-25 14:18:27.262203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.413 [2024-11-25 14:18:27.262207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.413 [2024-11-25 14:18:27.262249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:22.413 [2024-11-25 14:18:27.262257] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:20:22.413 [2024-11-25 14:18:27.263248] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:22.413 [2024-11-25 14:18:27.263283] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:20:22.413 [2024-11-25 14:18:27.263288] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:20:22.413 [2024-11-25 14:18:27.264249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:20:22.413 [2024-11-25 14:18:27.264258] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:20:22.413 [2024-11-25 14:18:27.264298] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:20:22.413 [2024-11-25 14:18:27.265273] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:22.413
14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:22.413 [2024-11-25 14:18:27.454249] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:27.817 Initializing NVMe Controllers 00:20:27.817 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:27.817 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:27.817 Initialization complete. Launching workers. 00:20:27.817 ======================================================== 00:20:27.818 Latency(us) 00:20:27.818 Device Information : IOPS MiB/s Average min max 00:20:27.818 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40004.98 156.27 3199.47 841.57 9772.48 00:20:27.818 ======================================================== 00:20:27.818 Total : 40004.98 156.27 3199.47 841.57 9772.48 00:20:27.818 00:20:27.818 [2024-11-25 14:18:32.560373] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:27.818 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:27.818 [2024-11-25 14:18:32.750997] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:33.105 Initializing NVMe Controllers 00:20:33.105 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:33.105 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:33.105 Initialization complete. Launching workers. 00:20:33.105 ======================================================== 00:20:33.105 Latency(us) 00:20:33.105 Device Information : IOPS MiB/s Average min max 00:20:33.105 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40025.40 156.35 3197.83 844.24 6809.85 00:20:33.105 ======================================================== 00:20:33.105 Total : 40025.40 156.35 3197.83 844.24 6809.85 00:20:33.105 00:20:33.105 [2024-11-25 14:18:37.770260] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:33.105 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:33.105 [2024-11-25 14:18:37.972537] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:38.383 [2024-11-25 14:18:43.119251] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:38.383 Initializing NVMe Controllers 00:20:38.383 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:38.383 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:38.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:38.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:38.383 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:38.383 Initialization complete. Launching workers. 
00:20:38.383 Starting thread on core 2 00:20:38.383 Starting thread on core 3 00:20:38.383 Starting thread on core 1 00:20:38.383 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:38.383 [2024-11-25 14:18:43.384562] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:41.690 [2024-11-25 14:18:46.440541] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:41.690 Initializing NVMe Controllers 00:20:41.690 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:41.690 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:41.690 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:41.690 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:41.690 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:41.690 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:41.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:41.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:41.690 Initialization complete. Launching workers. 00:20:41.690 Starting thread on core 1 with urgent priority queue 00:20:41.690 Starting thread on core 2 with urgent priority queue 00:20:41.690 Starting thread on core 3 with urgent priority queue 00:20:41.690 Starting thread on core 0 with urgent priority queue 00:20:41.690 SPDK bdev Controller (SPDK2 ) core 0: 10889.33 IO/s 9.18 secs/100000 ios 00:20:41.690 SPDK bdev Controller (SPDK2 ) core 1: 9013.67 IO/s 11.09 secs/100000 ios 00:20:41.690 SPDK bdev Controller (SPDK2 ) core 2: 8713.67 IO/s 11.48 secs/100000 ios 00:20:41.690 SPDK bdev Controller (SPDK2 ) core 3: 8911.67 IO/s 11.22 secs/100000 ios 00:20:41.690 ======================================================== 00:20:41.690 00:20:41.690 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:41.690 [2024-11-25 14:18:46.682535] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:41.690 Initializing NVMe Controllers 00:20:41.690 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:41.690 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:41.690 Namespace ID: 1 size: 0GB 00:20:41.690 Initialization complete. 00:20:41.690 INFO: using host memory buffer for IO 00:20:41.690 Hello world! 
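Note: the example binaries exercised in this block (spdk_nvme_perf, reconnect, arbitration, hello_world) all reach the target through the same vfio-user transport-ID string. A minimal sketch of the pattern, assembled from the commands in this run with the workspace prefix shortened; the flag glosses are my reading of the tools, not something the log states:

    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
    # 4096-byte reads for 5 seconds at queue depth 128, pinned to core 1 (mask 0x2)
    build/bin/spdk_nvme_perf -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
    # one-shot demo that writes a buffer and reads it back over the same controller
    build/examples/hello_world -d 256 -g -r "$TRID"

Here -q is the queue depth, -o the I/O size in bytes, -w the workload pattern, -t the run time in seconds, and -c the core mask; -s, -d, and -g are carried over verbatim from the run without interpretation.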
00:20:41.690 [2024-11-25 14:18:46.692607] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:41.690 14:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:41.950 [2024-11-25 14:18:46.926567] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:43.336 Initializing NVMe Controllers 00:20:43.336 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:43.336 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:43.336 Initialization complete. Launching workers. 00:20:43.336 submit (in ns) avg, min, max = 6465.8, 2820.0, 3997667.5 00:20:43.336 complete (in ns) avg, min, max = 17614.2, 1633.3, 3997225.8 00:20:43.336 00:20:43.336 Submit histogram 00:20:43.336 ================ 00:20:43.336 Range in us Cumulative Count 00:20:43.336 2.813 - 2.827: 0.0583% ( 12) 00:20:43.336 2.827 - 2.840: 0.8945% ( 172) 00:20:43.336 2.840 - 2.853: 2.4112% ( 312) 00:20:43.336 2.853 - 2.867: 6.2953% ( 799) 00:20:43.336 2.867 - 2.880: 11.1370% ( 996) 00:20:43.336 2.880 - 2.893: 17.8115% ( 1373) 00:20:43.336 2.893 - 2.907: 23.3241% ( 1134) 00:20:43.336 2.907 - 2.920: 29.1478% ( 1198) 00:20:43.336 2.920 - 2.933: 34.2375% ( 1047) 00:20:43.336 2.933 - 2.947: 39.4196% ( 1066) 00:20:43.336 2.947 - 2.960: 45.2287% ( 1195) 00:20:43.336 2.960 - 2.973: 51.3295% ( 1255) 00:20:43.336 2.973 - 2.987: 57.2408% ( 1216) 00:20:43.336 2.987 - 3.000: 65.2521% ( 1648) 00:20:43.336 3.000 - 3.013: 74.1335% ( 1827) 00:20:43.336 3.013 - 3.027: 82.7330% ( 1769) 00:20:43.336 3.027 - 3.040: 89.4998% ( 1392) 00:20:43.336 3.040 - 3.053: 94.2686% ( 981) 00:20:43.336 3.053 - 3.067: 97.1562% ( 594) 00:20:43.336 3.067 - 3.080: 98.4930% ( 275) 00:20:43.336 3.080 - 3.093: 99.1444% ( 134) 00:20:43.337 3.093 - 3.107: 99.3875% ( 50) 00:20:43.337 3.107 - 3.120: 99.4507% ( 13) 00:20:43.337 3.120 - 3.133: 99.4944% ( 9) 00:20:43.337 3.147 - 3.160: 99.5090% ( 3) 00:20:43.337 3.267 - 3.280: 99.5187% ( 2) 00:20:43.337 3.307 - 3.320: 99.5285% ( 2) 00:20:43.337 3.440 - 3.467: 99.5333% ( 1) 00:20:43.337 3.520 - 3.547: 99.5382% ( 1) 00:20:43.337 3.627 - 3.653: 99.5430% ( 1) 00:20:43.337 3.653 - 3.680: 99.5479% ( 1) 00:20:43.337 3.760 - 3.787: 99.5528% ( 1) 00:20:43.337 3.840 - 3.867: 99.5576% ( 1) 00:20:43.337 3.893 - 3.920: 99.5625% ( 1) 00:20:43.337 4.027 - 4.053: 99.5674% ( 1) 00:20:43.337 4.107 - 4.133: 99.5722% ( 1) 00:20:43.337 4.133 - 4.160: 99.5771% ( 1) 00:20:43.337 4.293 - 4.320: 99.5819% ( 1) 00:20:43.337 4.347 - 4.373: 99.5868% ( 1) 00:20:43.337 4.400 - 4.427: 99.5917% ( 1) 00:20:43.337 4.507 - 4.533: 99.5965% ( 1) 00:20:43.337 4.560 - 4.587: 99.6062% ( 2) 00:20:43.337 4.587 - 4.613: 99.6111% ( 1) 00:20:43.337 4.613 - 4.640: 99.6160% ( 1) 00:20:43.337 4.640 - 4.667: 99.6208% ( 1) 00:20:43.337 4.693 - 4.720: 99.6257% ( 1) 00:20:43.337 4.720 - 4.747: 99.6403% ( 3) 00:20:43.337 4.747 - 4.773: 99.6500% ( 2) 00:20:43.337 4.773 - 4.800: 99.6549% ( 1) 00:20:43.337 4.827 - 4.853: 99.6743% ( 4) 00:20:43.337 4.880 - 4.907: 99.6792% ( 1) 00:20:43.337 4.907 - 4.933: 99.6840% ( 1) 00:20:43.337 4.960 - 4.987: 99.6889% ( 1) 00:20:43.337 4.987 - 5.013: 99.6986% ( 2) 00:20:43.337 5.013 - 5.040: 99.7180% ( 4) 00:20:43.337 5.040 - 5.067: 99.7278% ( 2) 00:20:43.337 5.093 - 5.120: 99.7326% ( 1) 00:20:43.337 5.120 - 5.147: 
99.7375% ( 1) 00:20:43.337 5.147 - 5.173: 99.7521% ( 3) 00:20:43.337 5.173 - 5.200: 99.7667% ( 3) 00:20:43.337 5.200 - 5.227: 99.7715% ( 1) 00:20:43.337 5.227 - 5.253: 99.7764% ( 1) 00:20:43.337 5.253 - 5.280: 99.7861% ( 2) 00:20:43.337 5.307 - 5.333: 99.7910% ( 1) 00:20:43.337 5.360 - 5.387: 99.7958% ( 1) 00:20:43.337 5.387 - 5.413: 99.8153% ( 4) 00:20:43.337 5.440 - 5.467: 99.8201% ( 1) 00:20:43.337 5.547 - 5.573: 99.8250% ( 1) 00:20:43.337 5.573 - 5.600: 99.8299% ( 1) 00:20:43.337 5.600 - 5.627: 99.8347% ( 1) 00:20:43.337 5.653 - 5.680: 99.8444% ( 2) 00:20:43.337 5.680 - 5.707: 99.8493% ( 1) 00:20:43.337 5.707 - 5.733: 99.8542% ( 1) 00:20:43.337 5.840 - 5.867: 99.8590% ( 1) 00:20:43.337 5.893 - 5.920: 99.8639% ( 1) 00:20:43.337 5.920 - 5.947: 99.8687% ( 1) 00:20:43.337 6.027 - 6.053: 99.8736% ( 1) 00:20:43.337 6.213 - 6.240: 99.8785% ( 1) 00:20:43.337 6.293 - 6.320: 99.8833% ( 1) 00:20:43.337 [2024-11-25 14:18:48.017679] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:43.337 6.453 - 6.480: 99.8882% ( 1) 00:20:43.337 6.533 - 6.560: 99.8931% ( 1) 00:20:43.337 6.987 - 7.040: 99.8979% ( 1) 00:20:43.337 7.093 - 7.147: 99.9028% ( 1) 00:20:43.337 7.147 - 7.200: 99.9076% ( 1) 00:20:43.337 10.827 - 10.880: 99.9125% ( 1) 00:20:43.337 3986.773 - 4014.080: 100.0000% ( 18) 00:20:43.337 00:20:43.337 Complete histogram 00:20:43.337 ================== 00:20:43.337 Range in us Cumulative Count 00:20:43.337 1.633 - 1.640: 0.5736% ( 118) 00:20:43.337 1.640 - 1.647: 1.1327% ( 115) 00:20:43.337 1.647 - 1.653: 1.1861% ( 11) 00:20:43.337 1.653 - 1.660: 1.4098% ( 46) 00:20:43.337 1.660 - 1.667: 1.5021% ( 19) 00:20:43.337 1.667 - 1.673: 1.5410% ( 8) 00:20:43.337 1.673 - 1.680: 1.6674% ( 26) 00:20:43.337 1.680 - 1.687: 44.3294% ( 8776) 00:20:43.337 1.687 - 1.693: 52.3553% ( 1651) 00:20:43.337 1.693 - 1.700: 57.8727% ( 1135) 00:20:43.337 1.700 - 1.707: 71.0369% ( 2708) 00:20:43.337 1.707 - 1.720: 80.8177% ( 2012) 00:20:43.337 1.720 - 1.733: 83.1316% ( 476) 00:20:43.337 1.733 - 1.747: 85.9317% ( 576) 00:20:43.337 1.747 - 1.760: 90.9193% ( 1026) 00:20:43.337 1.760 - 1.773: 95.7805% ( 1000) 00:20:43.337 1.773 - 1.787: 98.1041% ( 478) 00:20:43.337 1.787 - 1.800: 99.1250% ( 210) 00:20:43.337 1.800 - 1.813: 99.4410% ( 65) 00:20:43.337 1.813 - 1.827: 99.4750% ( 7) 00:20:43.337 1.827 - 1.840: 99.4799% ( 1) 00:20:43.337 1.840 - 1.853: 99.4847% ( 1) 00:20:43.337 3.053 - 3.067: 99.4896% ( 1) 00:20:43.337 3.227 - 3.240: 99.4944% ( 1) 00:20:43.337 3.267 - 3.280: 99.4993% ( 1) 00:20:43.337 3.387 - 3.400: 99.5042% ( 1) 00:20:43.337 3.413 - 3.440: 99.5139% ( 2) 00:20:43.337 3.787 - 3.813: 99.5187% ( 1) 00:20:43.337 3.813 - 3.840: 99.5236% ( 1) 00:20:43.337 3.867 - 3.893: 99.5333% ( 2) 00:20:43.337 4.027 - 4.053: 99.5430% ( 2) 00:20:43.337 4.133 - 4.160: 99.5479% ( 1) 00:20:43.337 4.160 - 4.187: 99.5576% ( 2) 00:20:43.337 4.187 - 4.213: 99.5625% ( 1) 00:20:43.337 4.240 - 4.267: 99.5722% ( 2) 00:20:43.338 4.453 - 4.480: 99.5771% ( 1) 00:20:43.338 4.667 - 4.693: 99.5819% ( 1) 00:20:43.338 4.853 - 4.880: 99.5868% ( 1) 00:20:43.338 8.960 - 9.013: 99.5917% ( 1) 00:20:43.338 9.973 - 10.027: 99.5965% ( 1) 00:20:43.338 10.187 - 10.240: 99.6014% ( 1) 00:20:43.338 3741.013 - 3768.320: 99.6062% ( 1) 00:20:43.338 3986.773 - 4014.080: 100.0000% ( 81) 00:20:43.338 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:43.338 14:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:43.338 [ 00:20:43.338 { 00:20:43.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:43.338 "subtype": "Discovery", 00:20:43.338 "listen_addresses": [], 00:20:43.338 "allow_any_host": true, 00:20:43.338 "hosts": [] 00:20:43.338 }, 00:20:43.338 { 00:20:43.338 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:43.338 "subtype": "NVMe", 00:20:43.338 "listen_addresses": [ 00:20:43.338 { 00:20:43.338 "trtype": "VFIOUSER", 00:20:43.338 "adrfam": "IPv4", 00:20:43.338 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:43.338 "trsvcid": "0" 00:20:43.338 } 00:20:43.338 ], 00:20:43.338 "allow_any_host": true, 00:20:43.338 "hosts": [], 00:20:43.338 "serial_number": "SPDK1", 00:20:43.338 "model_number": "SPDK bdev Controller", 00:20:43.338 "max_namespaces": 32, 00:20:43.338 "min_cntlid": 1, 00:20:43.338 "max_cntlid": 65519, 00:20:43.338 "namespaces": [ 00:20:43.338 { 00:20:43.338 "nsid": 1, 00:20:43.338 "bdev_name": "Malloc1", 00:20:43.338 "name": "Malloc1", 00:20:43.338 "nguid": "6C1C264050184753B7E86DEF2B1BCA65", 00:20:43.338 "uuid": "6c1c2640-5018-4753-b7e8-6def2b1bca65" 00:20:43.338 }, 00:20:43.338 { 00:20:43.338 "nsid": 2, 00:20:43.338 "bdev_name": "Malloc3", 00:20:43.338 "name": "Malloc3", 00:20:43.338 "nguid": "106C1FFE25864AC493A3B980CE01FD0A", 00:20:43.338 "uuid": "106c1ffe-2586-4ac4-93a3-b980ce01fd0a" 00:20:43.338 } 00:20:43.338 ] 00:20:43.338 }, 00:20:43.338 { 00:20:43.338 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:43.338 "subtype": "NVMe", 00:20:43.338 "listen_addresses": [ 00:20:43.338 { 00:20:43.338 "trtype": "VFIOUSER", 00:20:43.338 "adrfam": "IPv4", 00:20:43.338 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:43.338 "trsvcid": "0" 00:20:43.338 } 00:20:43.338 ], 00:20:43.338 "allow_any_host": true, 00:20:43.338 "hosts": [], 00:20:43.338 "serial_number": "SPDK2", 00:20:43.338 "model_number": "SPDK bdev Controller", 00:20:43.338 "max_namespaces": 32, 00:20:43.338 "min_cntlid": 1, 00:20:43.338 "max_cntlid": 65519, 00:20:43.338 "namespaces": [ 00:20:43.338 { 00:20:43.338 "nsid": 1, 00:20:43.338 "bdev_name": "Malloc2", 00:20:43.338 "name": "Malloc2", 00:20:43.338 "nguid": "0C0FC25FC01A4C35922F9B18FC981BEF", 00:20:43.338 "uuid": "0c0fc25f-c01a-4c35-922f-9b18fc981bef" 00:20:43.338 } 00:20:43.338 ] 00:20:43.338 } 00:20:43.338 ] 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3393715 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:43.338 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:43.338 [2024-11-25 14:18:48.394587] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:43.599 Malloc4 00:20:43.599 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:43.599 [2024-11-25 14:18:48.605022] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:43.599 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:43.599 Asynchronous Event Request test 00:20:43.599 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:43.599 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:43.599 Registering asynchronous event callbacks... 00:20:43.599 Starting namespace attribute notice tests for all controllers... 00:20:43.599 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:43.599 aer_cb - Changed Namespace 00:20:43.599 Cleaning up... 
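Note: the AER exercise above is synchronized through a touch file: test/nvme/aer/aer connects to cnode2, arms an Asynchronous Event Request, and creates /tmp/aer_touch_file once it is ready; the script waits for that file, hot-adds a second namespace, and that attach fires the namespace-attribute-changed event logged as 'aer_cb - Changed Namespace'. A condensed sketch of the same sequence using the commands from this run (the backgrounding and the final wait are implied by the script's aerpid handling):

    test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done    # waitforfile
    rm -f /tmp/aer_touch_file
    # hot-add a second namespace to trigger the armed AEN
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    wait $aerpid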
00:20:43.860 [ 00:20:43.860 { 00:20:43.860 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:43.860 "subtype": "Discovery", 00:20:43.860 "listen_addresses": [], 00:20:43.860 "allow_any_host": true, 00:20:43.860 "hosts": [] 00:20:43.860 }, 00:20:43.860 { 00:20:43.860 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:43.860 "subtype": "NVMe", 00:20:43.860 "listen_addresses": [ 00:20:43.860 { 00:20:43.860 "trtype": "VFIOUSER", 00:20:43.860 "adrfam": "IPv4", 00:20:43.860 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:43.860 "trsvcid": "0" 00:20:43.860 } 00:20:43.860 ], 00:20:43.860 "allow_any_host": true, 00:20:43.860 "hosts": [], 00:20:43.860 "serial_number": "SPDK1", 00:20:43.860 "model_number": "SPDK bdev Controller", 00:20:43.860 "max_namespaces": 32, 00:20:43.860 "min_cntlid": 1, 00:20:43.860 "max_cntlid": 65519, 00:20:43.860 "namespaces": [ 00:20:43.860 { 00:20:43.860 "nsid": 1, 00:20:43.860 "bdev_name": "Malloc1", 00:20:43.860 "name": "Malloc1", 00:20:43.860 "nguid": "6C1C264050184753B7E86DEF2B1BCA65", 00:20:43.860 "uuid": "6c1c2640-5018-4753-b7e8-6def2b1bca65" 00:20:43.860 }, 00:20:43.860 { 00:20:43.860 "nsid": 2, 00:20:43.860 "bdev_name": "Malloc3", 00:20:43.860 "name": "Malloc3", 00:20:43.860 "nguid": "106C1FFE25864AC493A3B980CE01FD0A", 00:20:43.860 "uuid": "106c1ffe-2586-4ac4-93a3-b980ce01fd0a" 00:20:43.860 } 00:20:43.860 ] 00:20:43.860 }, 00:20:43.860 { 00:20:43.860 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:43.860 "subtype": "NVMe", 00:20:43.860 "listen_addresses": [ 00:20:43.860 { 00:20:43.860 "trtype": "VFIOUSER", 00:20:43.860 "adrfam": "IPv4", 00:20:43.860 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:43.860 "trsvcid": "0" 00:20:43.860 } 00:20:43.860 ], 00:20:43.860 "allow_any_host": true, 00:20:43.860 "hosts": [], 00:20:43.860 "serial_number": "SPDK2", 00:20:43.860 "model_number": "SPDK bdev Controller", 00:20:43.860 "max_namespaces": 32, 00:20:43.860 "min_cntlid": 1, 00:20:43.860 "max_cntlid": 65519, 00:20:43.860 "namespaces": [ 00:20:43.860 { 00:20:43.860 "nsid": 1, 00:20:43.860 "bdev_name": "Malloc2", 00:20:43.860 "name": "Malloc2", 00:20:43.860 "nguid": "0C0FC25FC01A4C35922F9B18FC981BEF", 00:20:43.860 "uuid": "0c0fc25f-c01a-4c35-922f-9b18fc981bef" 00:20:43.860 }, 00:20:43.860 { 00:20:43.860 "nsid": 2, 00:20:43.860 "bdev_name": "Malloc4", 00:20:43.860 "name": "Malloc4", 00:20:43.860 "nguid": "A3D17482AD694501B91F2968993221EB", 00:20:43.860 "uuid": "a3d17482-ad69-4501-b91f-2968993221eb" 00:20:43.860 } 00:20:43.860 ] 00:20:43.860 } 00:20:43.860 ] 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3393715 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3384400 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3384400 ']' 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3384400 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3384400 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3384400' 00:20:43.860 killing process with pid 3384400 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3384400 00:20:43.860 14:18:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3384400 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3393964 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3393964' 00:20:44.121 Process pid: 3393964 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3393964 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3393964 ']' 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.121 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:44.121 [2024-11-25 14:18:49.083999] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:44.121 [2024-11-25 14:18:49.084922] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:20:44.121 [2024-11-25 14:18:49.084962] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.121 [2024-11-25 14:18:49.169550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.121 [2024-11-25 14:18:49.199008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.121 [2024-11-25 14:18:49.199042] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.121 [2024-11-25 14:18:49.199049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.121 [2024-11-25 14:18:49.199054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.121 [2024-11-25 14:18:49.199058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.121 [2024-11-25 14:18:49.200309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.121 [2024-11-25 14:18:49.200463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.121 [2024-11-25 14:18:49.200612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.121 [2024-11-25 14:18:49.200614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.381 [2024-11-25 14:18:49.251186] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:44.381 [2024-11-25 14:18:49.252135] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:44.381 [2024-11-25 14:18:49.253027] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:44.381 [2024-11-25 14:18:49.253809] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:44.381 [2024-11-25 14:18:49.253828] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
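Note: this second pass repeats the vfio-user setup with the target in interrupt mode: nvmf_tgt was started with --interrupt-mode on cores 0-3, every reactor and spdk_thread reports switching to intr mode above, and the script then creates the VFIOUSER transport with the extra '-M -I' arguments it carries in transport_args (see the trace just below). A sketch of that startup sequence, with the workspace prefix shortened and '-M -I' reproduced verbatim from the run without interpreting them:

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    # once the target is up and answering on /var/tmp/spdk.sock, add the transport
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user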
00:20:44.951 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.951 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:44.951 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:45.890 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:46.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:46.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:46.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:46.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:46.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:46.410 Malloc1 00:20:46.410 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:46.410 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:46.672 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:46.933 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:46.933 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:46.933 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:47.193 Malloc2 00:20:47.194 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:47.194 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:47.454 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3393964 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 3393964 ']' 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3393964 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3393964 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3393964' 00:20:47.714 killing process with pid 3393964 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3393964 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3393964 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:47.714 00:20:47.714 real 0m50.962s 00:20:47.714 user 3m15.380s 00:20:47.714 sys 0m2.669s 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.714 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:47.714 ************************************ 00:20:47.714 END TEST nvmf_vfio_user 00:20:47.715 ************************************ 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:47.976 ************************************ 00:20:47.976 START TEST nvmf_vfio_user_nvme_compliance 00:20:47.976 ************************************ 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:47.976 * Looking for test storage... 
00:20:47.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:20:47.976 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.976 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.977 --rc genhtml_branch_coverage=1 00:20:47.977 --rc genhtml_function_coverage=1 00:20:47.977 --rc genhtml_legend=1 00:20:47.977 --rc geninfo_all_blocks=1 00:20:47.977 --rc geninfo_unexecuted_blocks=1 00:20:47.977 00:20:47.977 ' 00:20:47.977 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.977 --rc genhtml_branch_coverage=1 00:20:47.977 --rc genhtml_function_coverage=1 00:20:47.977 --rc genhtml_legend=1 00:20:47.977 --rc geninfo_all_blocks=1 00:20:47.977 --rc geninfo_unexecuted_blocks=1 00:20:47.977 00:20:47.977 ' 00:20:48.238 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.238 --rc genhtml_branch_coverage=1 00:20:48.238 --rc genhtml_function_coverage=1 00:20:48.238 --rc genhtml_legend=1 00:20:48.238 --rc geninfo_all_blocks=1 00:20:48.238 --rc geninfo_unexecuted_blocks=1 00:20:48.238 00:20:48.238 ' 00:20:48.238 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:48.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.238 --rc genhtml_branch_coverage=1 00:20:48.238 --rc genhtml_function_coverage=1 00:20:48.238 --rc genhtml_legend=1 00:20:48.238 --rc geninfo_all_blocks=1 00:20:48.238 --rc 
geninfo_unexecuted_blocks=1 00:20:48.238 00:20:48.238 ' 00:20:48.238 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3394723 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3394723' 00:20:48.239 Process pid: 3394723 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3394723 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3394723 ']' 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.239 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:48.239 [2024-11-25 14:18:53.166686] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:20:48.239 [2024-11-25 14:18:53.166776] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.240 [2024-11-25 14:18:53.253766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:48.240 [2024-11-25 14:18:53.288656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.240 [2024-11-25 14:18:53.288686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.240 [2024-11-25 14:18:53.288692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.240 [2024-11-25 14:18:53.288696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.240 [2024-11-25 14:18:53.288701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.240 [2024-11-25 14:18:53.289818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.240 [2024-11-25 14:18:53.289934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.240 [2024-11-25 14:18:53.289936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.182 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.182 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:20:49.182 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:50.124 malloc0 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.124 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:50.124 14:18:54 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.124 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:50.124 00:20:50.124 00:20:50.124 CUnit - A unit testing framework for C - Version 2.1-3 00:20:50.124 http://cunit.sourceforge.net/ 00:20:50.124 00:20:50.124 00:20:50.124 Suite: nvme_compliance 00:20:50.124 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-25 14:18:55.192544] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.124 [2024-11-25 14:18:55.193834] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:50.124 [2024-11-25 14:18:55.193845] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:50.124 [2024-11-25 14:18:55.193850] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:50.124 [2024-11-25 14:18:55.195566] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:50.384 passed 00:20:50.384 Test: admin_identify_ctrlr_verify_fused ...[2024-11-25 14:18:55.275080] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.384 [2024-11-25 14:18:55.278104] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:50.384 passed 00:20:50.384 Test: admin_identify_ns ...[2024-11-25 14:18:55.349605] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.384 [2024-11-25 14:18:55.410169] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:50.384 [2024-11-25 14:18:55.418172] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:50.384 [2024-11-25 14:18:55.439245] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:50.384 passed 00:20:50.644 Test: admin_get_features_mandatory_features ...[2024-11-25 14:18:55.513445] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.644 [2024-11-25 14:18:55.516469] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:50.644 passed 00:20:50.644 Test: admin_get_features_optional_features ...[2024-11-25 14:18:55.592917] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.644 [2024-11-25 14:18:55.595936] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:50.644 passed 00:20:50.644 Test: admin_set_features_number_of_queues ...[2024-11-25 14:18:55.671538] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.904 [2024-11-25 14:18:55.776253] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:50.904 passed 00:20:50.904 Test: admin_get_log_page_mandatory_logs ...[2024-11-25 14:18:55.851279] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.904 [2024-11-25 14:18:55.855301] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:50.904 passed 00:20:50.904 Test: admin_get_log_page_with_lpo ...[2024-11-25 14:18:55.929008] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.163 [2024-11-25 14:18:55.997169] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:51.163 [2024-11-25 14:18:56.010213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.163 passed 00:20:51.163 Test: fabric_property_get ...[2024-11-25 14:18:56.084431] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.163 [2024-11-25 14:18:56.085632] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:51.163 [2024-11-25 14:18:56.087453] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.163 passed 00:20:51.163 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-25 14:18:56.163923] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.163 [2024-11-25 14:18:56.165121] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:51.163 [2024-11-25 14:18:56.166938] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.163 passed 00:20:51.163 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-25 14:18:56.241503] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.424 [2024-11-25 14:18:56.325164] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:51.424 [2024-11-25 14:18:56.341162] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:51.424 [2024-11-25 14:18:56.349259] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.424 passed 00:20:51.424 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-25 14:18:56.420509] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.424 [2024-11-25 14:18:56.421707] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:51.424 [2024-11-25 14:18:56.423530] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.424 passed 00:20:51.424 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-25 14:18:56.502250] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.683 [2024-11-25 14:18:56.576173] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:51.684 [2024-11-25 14:18:56.600163] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:51.684 [2024-11-25 14:18:56.605229] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.684 passed 00:20:51.684 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-25 14:18:56.680277] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.684 [2024-11-25 14:18:56.681478] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:51.684 [2024-11-25 14:18:56.681496] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:51.684 [2024-11-25 14:18:56.683300] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.684 passed 00:20:51.684 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-25 14:18:56.759529] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.944 [2024-11-25 14:18:56.851165] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:51.944 [2024-11-25 14:18:56.859163] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:51.944 [2024-11-25 14:18:56.867164] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:51.944 [2024-11-25 14:18:56.875173] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:51.944 [2024-11-25 14:18:56.904235] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:51.944 passed 00:20:51.944 Test: admin_create_io_sq_verify_pc ...[2024-11-25 14:18:56.980281] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:51.944 [2024-11-25 14:18:56.998170] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:51.944 [2024-11-25 14:18:57.015438] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:52.205 passed 00:20:52.205 Test: admin_create_io_qp_max_qps ...[2024-11-25 14:18:57.088871] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:53.151 [2024-11-25 14:18:58.209167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:53.722 [2024-11-25 14:18:58.596738] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:53.722 passed 00:20:53.722 Test: admin_create_io_sq_shared_cq ...[2024-11-25 14:18:58.669547] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:53.722 [2024-11-25 14:18:58.805169] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:53.983 [2024-11-25 14:18:58.842213] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:53.983 passed 00:20:53.983 00:20:53.983 Run Summary: Type Total Ran Passed Failed Inactive 00:20:53.983 suites 1 1 n/a 0 0 00:20:53.983 tests 18 18 18 0 0 00:20:53.983 asserts 
360 360 360 0 n/a 00:20:53.983 00:20:53.983 Elapsed time = 1.503 seconds 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3394723 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3394723 ']' 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3394723 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3394723 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3394723' 00:20:53.983 killing process with pid 3394723 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3394723 00:20:53.983 14:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3394723 00:20:53.983 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:53.983 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:53.983 00:20:53.983 real 0m6.194s 00:20:53.983 user 0m17.576s 00:20:53.983 sys 0m0.517s 00:20:53.983 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:53.983 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:53.983 ************************************ 00:20:53.983 END TEST nvmf_vfio_user_nvme_compliance 00:20:53.983 ************************************ 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:54.244 ************************************ 00:20:54.244 START TEST nvmf_vfio_user_fuzz 00:20:54.244 ************************************ 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:54.244 * Looking for test storage... 
00:20:54.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.244 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.507 --rc genhtml_branch_coverage=1 00:20:54.507 --rc genhtml_function_coverage=1 00:20:54.507 --rc genhtml_legend=1 00:20:54.507 --rc geninfo_all_blocks=1 00:20:54.507 --rc geninfo_unexecuted_blocks=1 00:20:54.507 00:20:54.507 ' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.507 --rc genhtml_branch_coverage=1 00:20:54.507 --rc genhtml_function_coverage=1 00:20:54.507 --rc genhtml_legend=1 00:20:54.507 --rc geninfo_all_blocks=1 00:20:54.507 --rc geninfo_unexecuted_blocks=1 00:20:54.507 00:20:54.507 ' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.507 --rc genhtml_branch_coverage=1 00:20:54.507 --rc genhtml_function_coverage=1 00:20:54.507 --rc genhtml_legend=1 00:20:54.507 --rc geninfo_all_blocks=1 00:20:54.507 --rc geninfo_unexecuted_blocks=1 00:20:54.507 00:20:54.507 ' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:54.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.507 --rc genhtml_branch_coverage=1 00:20:54.507 --rc genhtml_function_coverage=1 00:20:54.507 --rc genhtml_legend=1 00:20:54.507 --rc geninfo_all_blocks=1 00:20:54.507 --rc geninfo_unexecuted_blocks=1 00:20:54.507 00:20:54.507 ' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:54.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3396121 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3396121' 00:20:54.507 Process pid: 3396121 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:54.507 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3396121 00:20:54.508 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3396121 ']' 00:20:54.508 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.508 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.508 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
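waitforlisten (from autotest_common.sh, with the max_retries=100 echoed above) blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A rough stand-in for the same handshake, assuming scripts/rpc.py and the standard rpc_get_methods RPC are available; this sketches the pattern, it is not the helper's actual code:

    pid=$!                                      # nvmf_tgt launched in the background
    for _ in $(seq 1 100); do                   # retry budget mirroring max_retries=100
        kill -0 "$pid" 2>/dev/null || exit 1    # give up if the target already died
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done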
00:20:54.508 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.508 14:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:55.451 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.451 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:20:55.451 14:19:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:56.393 malloc0 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
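Condensed, the rpc_cmd calls above (rpc_cmd is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) assemble the fuzz target in a handful of steps before nvme_fuzz is pointed at it:

    rm -rf /var/run/vfio-user && mkdir -p /var/run/vfio-user
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0        # 64 MB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz then runs for 30 seconds (-t 30) with a fixed seed (-S 123456) against 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user', the trid string built at vfio_user_fuzz.sh@41.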
00:20:56.393 14:19:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:28.523 Fuzzing completed. Shutting down the fuzz application 00:21:28.523 00:21:28.523 Dumping successful admin opcodes: 00:21:28.523 8, 9, 10, 24, 00:21:28.523 Dumping successful io opcodes: 00:21:28.523 0, 00:21:28.523 NS: 0x20000081ef00 I/O qp, Total commands completed: 1422749, total successful commands: 5591, random_seed: 2802808704 00:21:28.523 NS: 0x20000081ef00 admin qp, Total commands completed: 353538, total successful commands: 2848, random_seed: 757458496 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3396121 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3396121 ']' 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3396121 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3396121 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3396121' 00:21:28.523 killing process with pid 3396121 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3396121 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3396121 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:28.523 00:21:28.523 real 0m32.784s 00:21:28.523 user 0m37.869s 00:21:28.523 sys 0m24.390s 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:28.523 
************************************ 00:21:28.523 END TEST nvmf_vfio_user_fuzz 00:21:28.523 ************************************ 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:28.523 ************************************ 00:21:28.523 START TEST nvmf_auth_target 00:21:28.523 ************************************ 00:21:28.523 14:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:28.523 * Looking for test storage... 00:21:28.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.523 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:28.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.524 --rc genhtml_branch_coverage=1 00:21:28.524 --rc genhtml_function_coverage=1 00:21:28.524 --rc genhtml_legend=1 00:21:28.524 --rc geninfo_all_blocks=1 00:21:28.524 --rc geninfo_unexecuted_blocks=1 00:21:28.524 00:21:28.524 ' 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:28.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.524 --rc genhtml_branch_coverage=1 00:21:28.524 --rc genhtml_function_coverage=1 00:21:28.524 --rc genhtml_legend=1 00:21:28.524 --rc geninfo_all_blocks=1 00:21:28.524 --rc geninfo_unexecuted_blocks=1 00:21:28.524 00:21:28.524 ' 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:28.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.524 --rc genhtml_branch_coverage=1 00:21:28.524 --rc genhtml_function_coverage=1 00:21:28.524 --rc genhtml_legend=1 00:21:28.524 --rc geninfo_all_blocks=1 00:21:28.524 --rc geninfo_unexecuted_blocks=1 00:21:28.524 00:21:28.524 ' 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:28.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.524 --rc genhtml_branch_coverage=1 00:21:28.524 --rc genhtml_function_coverage=1 00:21:28.524 --rc genhtml_legend=1 00:21:28.524 --rc geninfo_all_blocks=1 00:21:28.524 --rc geninfo_unexecuted_blocks=1 00:21:28.524 00:21:28.524 ' 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.524 14:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.524 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.525 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.112 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:21:35.113 
14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:35.113 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.113 14:19:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:35.113 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:35.113 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:35.113 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:35.113 14:19:39 
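
The nvmf_tcp_init step traced above builds the test topology out of the two discovered E810 ports, which appear to be looped back-to-back on this rig: one port moves into a private network namespace and becomes the target, the other stays in the root namespace as the initiator. Condensed, with the addresses and names from this log:

TARGET_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"              # target port -> namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"            # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP (default port 4420) through the host firewall on the initiator side
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
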
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:35.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:21:35.113 00:21:35.113 --- 10.0.0.2 ping statistics --- 00:21:35.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.113 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:21:35.113 00:21:35.113 --- 10.0.0.1 ping statistics --- 00:21:35.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.113 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:35.113 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3406124 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3406124 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3406124 ']' 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
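
After both directions ping cleanly, the target application is started inside the namespace and the script blocks until its RPC socket answers. A simplified equivalent, with paths as in this workspace; the until-loop is only a stand-in for SPDK's waitforlisten helper, using rpc_get_methods as a cheap probe RPC:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ping -c 1 10.0.0.2                                 # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> initiator port

ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &
nvmfpid=$!
# poll the default RPC socket until the target is up
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
    &> /dev/null; do sleep 0.5; done
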
00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.114 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3406283 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e2fc3993878b7f868d5b60dc6aeaa9ac81c4ab8bc19a445d 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.aGc 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e2fc3993878b7f868d5b60dc6aeaa9ac81c4ab8bc19a445d 0 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e2fc3993878b7f868d5b60dc6aeaa9ac81c4ab8bc19a445d 0 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e2fc3993878b7f868d5b60dc6aeaa9ac81c4ab8bc19a445d 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:35.685 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
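
gen_dhchap_key, traced above, is where the DHHC-1 secrets come from: xxd pulls N random bytes from /dev/urandom as a hex string, and the python one-liner wraps that string in the DHHC-1 container, i.e. base64 of the secret bytes plus their little-endian CRC-32, behind a two-digit HMAC id (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A self-contained sketch of the same transformation, with the argument handling simplified:

key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, as for keys[0] above
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib
key, hmac_id = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")    # 4-byte trailing checksum
print("DHHC-1:{:02x}:{}:".format(hmac_id, base64.b64encode(key + crc).decode()))
EOF
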
00:21:35.946 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.aGc 00:21:35.946 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.aGc 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.aGc 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1184f62d087be2b8e0a2f34f4180eebfc3bac0958af5d167c42d20360b526d1a 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.NAN 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1184f62d087be2b8e0a2f34f4180eebfc3bac0958af5d167c42d20360b526d1a 3 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1184f62d087be2b8e0a2f34f4180eebfc3bac0958af5d167c42d20360b526d1a 3 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1184f62d087be2b8e0a2f34f4180eebfc3bac0958af5d167c42d20360b526d1a 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.NAN 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.NAN 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.NAN 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b4096c1c0a8753dffac43f37e94120df 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xVP 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b4096c1c0a8753dffac43f37e94120df 1 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b4096c1c0a8753dffac43f37e94120df 1 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b4096c1c0a8753dffac43f37e94120df 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xVP 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xVP 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.xVP 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=93e440ed053a81ff36dd6ffa742de6d840fa9b5f07ef72c8 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.tg4 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 93e440ed053a81ff36dd6ffa742de6d840fa9b5f07ef72c8 2 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 93e440ed053a81ff36dd6ffa742de6d840fa9b5f07ef72c8 2 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:35.947 14:19:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=93e440ed053a81ff36dd6ffa742de6d840fa9b5f07ef72c8 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.tg4 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.tg4 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.tg4 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:35.947 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d6d33273fabdc03448bc34f30af5b3bc495745814ade5a5c 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CEy 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d6d33273fabdc03448bc34f30af5b3bc495745814ade5a5c 2 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d6d33273fabdc03448bc34f30af5b3bc495745814ade5a5c 2 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d6d33273fabdc03448bc34f30af5b3bc495745814ade5a5c 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:35.947 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CEy 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CEy 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.CEy 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=45001cfc578b966e6b2ecce2ba07079a 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.otD 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 45001cfc578b966e6b2ecce2ba07079a 1 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 45001cfc578b966e6b2ecce2ba07079a 1 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=45001cfc578b966e6b2ecce2ba07079a 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.otD 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.otD 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.otD 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=44229bcddebd2faaf28ba0c7e62a6eb25766ac3c1b82cd166115cdb1a0f00e2a 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ci8 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 44229bcddebd2faaf28ba0c7e62a6eb25766ac3c1b82cd166115cdb1a0f00e2a 3 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 44229bcddebd2faaf28ba0c7e62a6eb25766ac3c1b82cd166115cdb1a0f00e2a 3 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=44229bcddebd2faaf28ba0c7e62a6eb25766ac3c1b82cd166115cdb1a0f00e2a 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ci8 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ci8 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Ci8 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3406124 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3406124 ']' 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.209 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3406283 /var/tmp/host.sock 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3406283 ']' 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:36.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
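
By this point the trace has produced four target keys and three controller (bidirectional) keys, each written mode 0600 to a /tmp/spdk.key-* file, and a second SPDK process has been started to act as the host: the target answers on the default /var/tmp/spdk.sock, while spdk_tgt plays the initiator on /var/tmp/host.sock. The hostrpc wrapper used throughout the rest of the log is just rpc.py pointed at that second socket; both helpers are reduced to their essence here:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &
hostpid=$!
hostrpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }
rpc_cmd() { "$SPDK/scripts/rpc.py" "$@"; }    # target side, default socket
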
00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.470 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.730 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.730 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:36.730 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:36.730 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.730 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aGc 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aGc 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aGc 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.NAN ]] 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NAN 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NAN 00:21:36.731 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NAN 00:21:36.993 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:36.993 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xVP 00:21:36.993 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.993 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.993 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.993 14:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.xVP 00:21:36.993 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.xVP 00:21:37.254 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.tg4 ]] 00:21:37.254 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tg4 00:21:37.254 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.254 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.254 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.254 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tg4 00:21:37.254 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tg4 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CEy 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.CEy 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.CEy 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.otD ]] 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.otD 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.otD 00:21:37.515 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.otD 00:21:37.775 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:37.775 14:19:42 
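
The loop traced above registers every key file with the keyring of both processes, under the same names on each side (keyN for the host key, ckeyN for the optional controller key), so that later RPCs can refer to keys by name instead of by path. In outline:

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i"  "${keys[i]}"     # target process
    hostrpc keyring_file_add_key "key$i"  "${keys[i]}"     # host process
    if [[ -n ${ckeys[i]} ]]; then                          # ckeys[3] is empty above
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
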
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ci8 00:21:37.775 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.776 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.776 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.776 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Ci8 00:21:37.776 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Ci8 00:21:38.036 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:38.036 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:38.036 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.036 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.036 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:38.036 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.036 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.037 
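
Each iteration of the digest/dhgroup/key sweep then configures both sides identically and attempts an authenticated attach. The first pass, spelled out with its values (sha256 digest, null DH group, key0 with bidirectional ckey0), all flags as they appear in the trace:

hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
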
14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.297 00:21:38.297 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.297 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.297 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.557 { 00:21:38.557 "cntlid": 1, 00:21:38.557 "qid": 0, 00:21:38.557 "state": "enabled", 00:21:38.557 "thread": "nvmf_tgt_poll_group_000", 00:21:38.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.557 "listen_address": { 00:21:38.557 "trtype": "TCP", 00:21:38.557 "adrfam": "IPv4", 00:21:38.557 "traddr": "10.0.0.2", 00:21:38.557 "trsvcid": "4420" 00:21:38.557 }, 00:21:38.557 "peer_address": { 00:21:38.557 "trtype": "TCP", 00:21:38.557 "adrfam": "IPv4", 00:21:38.557 "traddr": "10.0.0.1", 00:21:38.557 "trsvcid": "47880" 00:21:38.557 }, 00:21:38.557 "auth": { 00:21:38.557 "state": "completed", 00:21:38.557 "digest": "sha256", 00:21:38.557 "dhgroup": "null" 00:21:38.557 } 00:21:38.557 } 00:21:38.557 ]' 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.557 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.818 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.818 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.818 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.819 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.819 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.079 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:39.079 14:19:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:39.649 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.909 14:19:44 
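
The same credentials are also pushed through the kernel initiator with nvme-cli: here the secrets travel in their full DHHC-1 form rather than as keyring names (abbreviated below; the complete strings are in the trace), and the host entry is removed again once the disconnect succeeds:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:00:ZTJm...:' --dhchap-ctrl-secret 'DHHC-1:03:MTE4...:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
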
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.909 14:19:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.169 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.169 { 00:21:40.169 "cntlid": 3, 00:21:40.169 "qid": 0, 00:21:40.169 "state": "enabled", 00:21:40.169 "thread": "nvmf_tgt_poll_group_000", 00:21:40.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:40.169 "listen_address": { 00:21:40.169 "trtype": "TCP", 00:21:40.169 "adrfam": "IPv4", 00:21:40.169 "traddr": "10.0.0.2", 00:21:40.169 "trsvcid": "4420" 00:21:40.169 }, 00:21:40.169 "peer_address": { 00:21:40.169 "trtype": "TCP", 00:21:40.169 "adrfam": "IPv4", 00:21:40.169 "traddr": "10.0.0.1", 00:21:40.169 "trsvcid": "38344" 00:21:40.169 }, 00:21:40.169 "auth": { 00:21:40.169 "state": "completed", 00:21:40.169 "digest": "sha256", 00:21:40.169 "dhgroup": "null" 00:21:40.169 } 00:21:40.169 } 00:21:40.169 ]' 00:21:40.169 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.429 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:40.429 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.429 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.429 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.429 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.429 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.429 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.689 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:21:40.689 14:19:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:41.260 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:41.520 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.521 14:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.521 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.781 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.781 { 00:21:41.781 "cntlid": 5, 00:21:41.781 "qid": 0, 00:21:41.781 "state": "enabled", 00:21:41.781 "thread": "nvmf_tgt_poll_group_000", 00:21:41.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.781 "listen_address": { 00:21:41.781 "trtype": "TCP", 00:21:41.781 "adrfam": "IPv4", 00:21:41.781 "traddr": "10.0.0.2", 00:21:41.781 "trsvcid": "4420" 00:21:41.781 }, 00:21:41.781 "peer_address": { 00:21:41.781 "trtype": "TCP", 00:21:41.781 "adrfam": "IPv4", 00:21:41.781 "traddr": "10.0.0.1", 00:21:41.781 "trsvcid": "38362" 00:21:41.781 }, 00:21:41.781 "auth": { 00:21:41.781 "state": "completed", 00:21:41.781 "digest": "sha256", 00:21:41.781 "dhgroup": "null" 00:21:41.781 } 00:21:41.781 } 00:21:41.781 ]' 00:21:41.781 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.041 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:42.041 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.041 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:42.041 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.041 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.041 14:19:46 
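
After every successful attach the script cross-checks what was actually negotiated: it pulls the subsystem's qpairs from the target and asserts the digest, DH group, and final auth state before detaching and moving on to the next combination. Reduced to the three jq probes seen in the trace:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
hostrpc bdev_nvme_detach_controller nvme0    # tear down before the next combo
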
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.041 14:19:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.301 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:21:42.301 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:21:42.870 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.870 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.871 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.871 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.871 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.871 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.871 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:42.871 14:19:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.131 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.392 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.392 { 00:21:43.392 "cntlid": 7, 00:21:43.392 "qid": 0, 00:21:43.392 "state": "enabled", 00:21:43.392 "thread": "nvmf_tgt_poll_group_000", 00:21:43.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.392 "listen_address": { 00:21:43.392 "trtype": "TCP", 00:21:43.392 "adrfam": "IPv4", 00:21:43.392 "traddr": "10.0.0.2", 00:21:43.392 "trsvcid": "4420" 00:21:43.392 }, 00:21:43.392 "peer_address": { 00:21:43.392 "trtype": "TCP", 00:21:43.392 "adrfam": "IPv4", 00:21:43.392 "traddr": "10.0.0.1", 00:21:43.392 "trsvcid": "38384" 00:21:43.392 }, 00:21:43.392 "auth": { 00:21:43.392 "state": "completed", 00:21:43.392 "digest": "sha256", 00:21:43.392 "dhgroup": "null" 00:21:43.392 } 00:21:43.392 } 00:21:43.392 ]' 00:21:43.392 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.653 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:43.653 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.653 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:43.653 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.653 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.653 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.653 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.914 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:21:43.914 14:19:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:21:44.551 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.551 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.551 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.551 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.551 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.552 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.552 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.552 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:44.552 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.861 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.861 14:19:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.160 { 00:21:45.160 "cntlid": 9, 00:21:45.160 "qid": 0, 00:21:45.160 "state": "enabled", 00:21:45.160 "thread": "nvmf_tgt_poll_group_000", 00:21:45.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:45.160 "listen_address": { 00:21:45.160 "trtype": "TCP", 00:21:45.160 "adrfam": "IPv4", 00:21:45.160 "traddr": "10.0.0.2", 00:21:45.160 "trsvcid": "4420" 00:21:45.160 }, 00:21:45.160 "peer_address": { 00:21:45.160 "trtype": "TCP", 00:21:45.160 "adrfam": "IPv4", 00:21:45.160 "traddr": "10.0.0.1", 00:21:45.160 "trsvcid": "38402" 00:21:45.160 }, 00:21:45.160 "auth": { 00:21:45.160 "state": "completed", 00:21:45.160 "digest": "sha256", 00:21:45.160 "dhgroup": "ffdhe2048" 00:21:45.160 } 00:21:45.160 } 00:21:45.160 ]' 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.160 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.444 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:45.444 14:19:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:46.018 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.279 14:19:51 
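[annotation] The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line traced above does real work: ${var:+word} expands to word only when var is set and non-empty, so the --dhchap-ctrlr-key flag materializes only for keys that have a controller secret. A self-contained illustration (array contents are hypothetical, $i plays the role of the function's $3):

    ckeys=([0]=a [1]=b [2]=c [3]=)   # hypothetical: key 3 has no ctrlr secret
    for i in 0 3; do
        ckey=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
        echo "key$i -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # key0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
    # key3 -> 0 extra arg(s):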
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.279 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.540 00:21:46.540 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.540 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.540 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.800 { 00:21:46.800 "cntlid": 11, 00:21:46.800 "qid": 0, 00:21:46.800 "state": "enabled", 00:21:46.800 "thread": "nvmf_tgt_poll_group_000", 00:21:46.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.800 "listen_address": { 00:21:46.800 "trtype": "TCP", 00:21:46.800 "adrfam": "IPv4", 00:21:46.800 "traddr": "10.0.0.2", 00:21:46.800 "trsvcid": "4420" 00:21:46.800 }, 00:21:46.800 "peer_address": { 00:21:46.800 "trtype": "TCP", 00:21:46.800 "adrfam": "IPv4", 00:21:46.800 "traddr": "10.0.0.1", 00:21:46.800 "trsvcid": "38434" 00:21:46.800 }, 00:21:46.800 "auth": { 00:21:46.800 "state": "completed", 00:21:46.800 "digest": "sha256", 00:21:46.800 "dhgroup": "ffdhe2048" 00:21:46.800 } 00:21:46.800 } 00:21:46.800 ]' 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.800 14:19:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.800 14:19:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.060 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:21:47.060 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:47.631 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.892 14:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.892 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.153 00:21:48.153 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.153 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.153 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.415 { 00:21:48.415 "cntlid": 13, 00:21:48.415 "qid": 0, 00:21:48.415 "state": "enabled", 00:21:48.415 "thread": "nvmf_tgt_poll_group_000", 00:21:48.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:48.415 "listen_address": { 00:21:48.415 "trtype": "TCP", 00:21:48.415 "adrfam": "IPv4", 00:21:48.415 "traddr": "10.0.0.2", 00:21:48.415 "trsvcid": "4420" 00:21:48.415 }, 00:21:48.415 "peer_address": { 00:21:48.415 "trtype": "TCP", 00:21:48.415 "adrfam": "IPv4", 00:21:48.415 "traddr": "10.0.0.1", 00:21:48.415 "trsvcid": "38466" 00:21:48.415 }, 00:21:48.415 "auth": { 00:21:48.415 "state": "completed", 00:21:48.415 "digest": 
"sha256", 00:21:48.415 "dhgroup": "ffdhe2048" 00:21:48.415 } 00:21:48.415 } 00:21:48.415 ]' 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.415 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.676 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:21:48.676 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:49.246 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.506 14:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.506 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:49.767 00:21:49.767 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.767 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.767 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.028 { 00:21:50.028 "cntlid": 15, 00:21:50.028 "qid": 0, 00:21:50.028 "state": "enabled", 00:21:50.028 "thread": "nvmf_tgt_poll_group_000", 00:21:50.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:50.028 "listen_address": { 00:21:50.028 "trtype": "TCP", 00:21:50.028 "adrfam": "IPv4", 00:21:50.028 "traddr": "10.0.0.2", 00:21:50.028 "trsvcid": "4420" 00:21:50.028 }, 00:21:50.028 "peer_address": { 00:21:50.028 "trtype": "TCP", 00:21:50.028 "adrfam": "IPv4", 00:21:50.028 "traddr": "10.0.0.1", 00:21:50.028 
"trsvcid": "38490" 00:21:50.028 }, 00:21:50.028 "auth": { 00:21:50.028 "state": "completed", 00:21:50.028 "digest": "sha256", 00:21:50.028 "dhgroup": "ffdhe2048" 00:21:50.028 } 00:21:50.028 } 00:21:50.028 ]' 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:50.028 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.028 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.028 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.028 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.287 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:21:50.287 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:21:50.857 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.857 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.857 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.857 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.857 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.858 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:50.858 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.858 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:50.858 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:51.118 14:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.118 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.379 00:21:51.379 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.379 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.379 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.379 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.379 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.379 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.379 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.641 { 00:21:51.641 "cntlid": 17, 00:21:51.641 "qid": 0, 00:21:51.641 "state": "enabled", 00:21:51.641 "thread": "nvmf_tgt_poll_group_000", 00:21:51.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:51.641 "listen_address": { 00:21:51.641 "trtype": "TCP", 00:21:51.641 "adrfam": "IPv4", 
00:21:51.641 "traddr": "10.0.0.2", 00:21:51.641 "trsvcid": "4420" 00:21:51.641 }, 00:21:51.641 "peer_address": { 00:21:51.641 "trtype": "TCP", 00:21:51.641 "adrfam": "IPv4", 00:21:51.641 "traddr": "10.0.0.1", 00:21:51.641 "trsvcid": "46096" 00:21:51.641 }, 00:21:51.641 "auth": { 00:21:51.641 "state": "completed", 00:21:51.641 "digest": "sha256", 00:21:51.641 "dhgroup": "ffdhe3072" 00:21:51.641 } 00:21:51.641 } 00:21:51.641 ]' 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.641 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.902 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:51.902 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.473 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.733 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.993 00:21:52.994 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.994 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.994 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.994 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.994 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.994 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.994 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.994 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.994 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.994 { 
00:21:52.994 "cntlid": 19, 00:21:52.994 "qid": 0, 00:21:52.994 "state": "enabled", 00:21:52.994 "thread": "nvmf_tgt_poll_group_000", 00:21:52.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:52.994 "listen_address": { 00:21:52.994 "trtype": "TCP", 00:21:52.994 "adrfam": "IPv4", 00:21:52.994 "traddr": "10.0.0.2", 00:21:52.994 "trsvcid": "4420" 00:21:52.994 }, 00:21:52.994 "peer_address": { 00:21:52.994 "trtype": "TCP", 00:21:52.994 "adrfam": "IPv4", 00:21:52.994 "traddr": "10.0.0.1", 00:21:52.994 "trsvcid": "46132" 00:21:52.994 }, 00:21:52.994 "auth": { 00:21:52.994 "state": "completed", 00:21:52.994 "digest": "sha256", 00:21:52.994 "dhgroup": "ffdhe3072" 00:21:52.994 } 00:21:52.994 } 00:21:52.994 ]' 00:21:52.994 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.255 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:53.255 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.255 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.255 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.255 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.255 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.255 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.515 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:21:53.515 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:54.085 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.345 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:54.606 00:21:54.606 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.606 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.606 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.606 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.866 14:19:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.866 { 00:21:54.866 "cntlid": 21, 00:21:54.866 "qid": 0, 00:21:54.866 "state": "enabled", 00:21:54.866 "thread": "nvmf_tgt_poll_group_000", 00:21:54.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:54.866 "listen_address": { 00:21:54.866 "trtype": "TCP", 00:21:54.866 "adrfam": "IPv4", 00:21:54.866 "traddr": "10.0.0.2", 00:21:54.866 "trsvcid": "4420" 00:21:54.866 }, 00:21:54.866 "peer_address": { 00:21:54.866 "trtype": "TCP", 00:21:54.866 "adrfam": "IPv4", 00:21:54.866 "traddr": "10.0.0.1", 00:21:54.866 "trsvcid": "46158" 00:21:54.866 }, 00:21:54.866 "auth": { 00:21:54.866 "state": "completed", 00:21:54.866 "digest": "sha256", 00:21:54.866 "dhgroup": "ffdhe3072" 00:21:54.866 } 00:21:54.866 } 00:21:54.866 ]' 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.866 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.125 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:21:55.125 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:55.695 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.956 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.216 00:21:56.216 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.216 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.216 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.475 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.476 14:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.476 { 00:21:56.476 "cntlid": 23, 00:21:56.476 "qid": 0, 00:21:56.476 "state": "enabled", 00:21:56.476 "thread": "nvmf_tgt_poll_group_000", 00:21:56.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.476 "listen_address": { 00:21:56.476 "trtype": "TCP", 00:21:56.476 "adrfam": "IPv4", 00:21:56.476 "traddr": "10.0.0.2", 00:21:56.476 "trsvcid": "4420" 00:21:56.476 }, 00:21:56.476 "peer_address": { 00:21:56.476 "trtype": "TCP", 00:21:56.476 "adrfam": "IPv4", 00:21:56.476 "traddr": "10.0.0.1", 00:21:56.476 "trsvcid": "46188" 00:21:56.476 }, 00:21:56.476 "auth": { 00:21:56.476 "state": "completed", 00:21:56.476 "digest": "sha256", 00:21:56.476 "dhgroup": "ffdhe3072" 00:21:56.476 } 00:21:56.476 } 00:21:56.476 ]' 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.476 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.735 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:21:56.735 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:57.306 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:57.567 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:57.567 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.567 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:57.567 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.567 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:57.567 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.567 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.568 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.568 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.568 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.568 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.568 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.568 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.828 00:21:57.828 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.828 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.828 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.089 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.089 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
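# ---- [editor's note: commentary added in review, not part of the captured output] ----
# Every connect_authenticate pass in this log is the same round trip,
# parameterised by (digest, dhgroup, keyid). Condensed sketch of one pass,
# assuming the named keys key0/ckey0 were registered with target and host
# earlier in the run (that setup precedes this excerpt); commands and flags
# are exactly as they appear in the xtrace above:

tgt_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_rpc() { "$tgt_rpc" -s /var/tmp/host.sock "$@"; }
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# 1. target: permit this host, binding it to a key (plus a controller key
#    for bidirectional authentication)
"$tgt_rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 2. host: attach a controller, offering the same key pair
host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. verify the negotiated auth parameters (the jq checks seen in this log),
#    then tear down so the next (digest, dhgroup, keyid) tuple starts clean
host_rpc bdev_nvme_detach_controller nvme0
"$tgt_rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
# ---- [end editor's note] ----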
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.089 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.089 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.089 { 00:21:58.089 "cntlid": 25, 00:21:58.089 "qid": 0, 00:21:58.089 "state": "enabled", 00:21:58.089 "thread": "nvmf_tgt_poll_group_000", 00:21:58.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:58.089 "listen_address": { 00:21:58.089 "trtype": "TCP", 00:21:58.089 "adrfam": "IPv4", 00:21:58.089 "traddr": "10.0.0.2", 00:21:58.089 "trsvcid": "4420" 00:21:58.089 }, 00:21:58.089 "peer_address": { 00:21:58.089 "trtype": "TCP", 00:21:58.089 "adrfam": "IPv4", 00:21:58.089 "traddr": "10.0.0.1", 00:21:58.089 "trsvcid": "46206" 00:21:58.089 }, 00:21:58.089 "auth": { 00:21:58.089 "state": "completed", 00:21:58.089 "digest": "sha256", 00:21:58.089 "dhgroup": "ffdhe4096" 00:21:58.089 } 00:21:58.089 } 00:21:58.089 ]' 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.089 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.349 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:58.349 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:58.920 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.181 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.441 00:21:59.441 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.441 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.441 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.702 { 00:21:59.702 "cntlid": 27, 00:21:59.702 "qid": 0, 00:21:59.702 "state": "enabled", 00:21:59.702 "thread": "nvmf_tgt_poll_group_000", 00:21:59.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:59.702 "listen_address": { 00:21:59.702 "trtype": "TCP", 00:21:59.702 "adrfam": "IPv4", 00:21:59.702 "traddr": "10.0.0.2", 00:21:59.702 "trsvcid": "4420" 00:21:59.702 }, 00:21:59.702 "peer_address": { 00:21:59.702 "trtype": "TCP", 00:21:59.702 "adrfam": "IPv4", 00:21:59.702 "traddr": "10.0.0.1", 00:21:59.702 "trsvcid": "46236" 00:21:59.702 }, 00:21:59.702 "auth": { 00:21:59.702 "state": "completed", 00:21:59.702 "digest": "sha256", 00:21:59.702 "dhgroup": "ffdhe4096" 00:21:59.702 } 00:21:59.702 } 00:21:59.702 ]' 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.702 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.963 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:21:59.963 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:00.535 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
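# ---- [editor's note: commentary added in review, not part of the captured output] ----
# Besides the SPDK host stack, each pass also exercises the kernel initiator
# through nvme-cli, as in the nvme connect just above. The secrets travel in
# the DHHC-1 text form "DHHC-1:xx:<base64>:", where xx appears to identify
# the hash/length class of the secret; treat that reading as the editor's
# gloss rather than something this log states. Shape of the call, with
# placeholder secrets standing in for the logged key material:

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
    --dhchap-secret 'DHHC-1:01:<base64 host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller key>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # "... disconnected 1 controller(s)"
# ---- [end editor's note] ----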
nqn.2024-03.io.spdk:cnode0 00:22:00.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.535 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.535 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.535 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:00.818 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:01.079 00:22:01.079 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
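# ---- [editor's note: commentary added in review, not part of the captured output] ----
# The "hostrpc bdev_nvme_get_controllers" that follows is the attach-success
# gate: had DH-HMAC-CHAP negotiation failed, no nvme0 controller would exist,
# bdev_nvme_get_controllers would return an empty list, and the
# [[ nvme0 == nvme0 ]] comparison below would not match. Equivalent one-liner
# against the host RPC socket used throughout this run:

[[ $(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# ---- [end editor's note] ----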
00:22:01.079 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.079 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.340 { 00:22:01.340 "cntlid": 29, 00:22:01.340 "qid": 0, 00:22:01.340 "state": "enabled", 00:22:01.340 "thread": "nvmf_tgt_poll_group_000", 00:22:01.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.340 "listen_address": { 00:22:01.340 "trtype": "TCP", 00:22:01.340 "adrfam": "IPv4", 00:22:01.340 "traddr": "10.0.0.2", 00:22:01.340 "trsvcid": "4420" 00:22:01.340 }, 00:22:01.340 "peer_address": { 00:22:01.340 "trtype": "TCP", 00:22:01.340 "adrfam": "IPv4", 00:22:01.340 "traddr": "10.0.0.1", 00:22:01.340 "trsvcid": "44332" 00:22:01.340 }, 00:22:01.340 "auth": { 00:22:01.340 "state": "completed", 00:22:01.340 "digest": "sha256", 00:22:01.340 "dhgroup": "ffdhe4096" 00:22:01.340 } 00:22:01.340 } 00:22:01.340 ]' 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.340 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.601 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:01.601 14:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: 
--dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.173 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:02.432 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:02.432 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.432 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.432 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:02.432 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.432 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.432 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:02.433 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.433 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.433 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.433 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.433 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.433 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.692 00:22:02.692 14:20:07 
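# ---- [editor's note: commentary added in review, not part of the captured output] ----
# Note the asymmetry in the keyid=3 pass above: the script's
#     ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
# expansion produced an empty array because no controller key is defined for
# key3 in this run, so nvmf_subsystem_add_host and the attach were issued
# with --dhchap-key key3 alone: unidirectional authentication, in which the
# host proves its identity but does not challenge the controller back. The
# ${var:+word} idiom in isolation (array contents here are hypothetical):

declare -A ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)   # key 3 deliberately unset
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"   # 0 -> the flag pair is simply omitted from the RPC call
# ---- [end editor's note] ----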
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.692 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.692 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.953 { 00:22:02.953 "cntlid": 31, 00:22:02.953 "qid": 0, 00:22:02.953 "state": "enabled", 00:22:02.953 "thread": "nvmf_tgt_poll_group_000", 00:22:02.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:02.953 "listen_address": { 00:22:02.953 "trtype": "TCP", 00:22:02.953 "adrfam": "IPv4", 00:22:02.953 "traddr": "10.0.0.2", 00:22:02.953 "trsvcid": "4420" 00:22:02.953 }, 00:22:02.953 "peer_address": { 00:22:02.953 "trtype": "TCP", 00:22:02.953 "adrfam": "IPv4", 00:22:02.953 "traddr": "10.0.0.1", 00:22:02.953 "trsvcid": "44362" 00:22:02.953 }, 00:22:02.953 "auth": { 00:22:02.953 "state": "completed", 00:22:02.953 "digest": "sha256", 00:22:02.953 "dhgroup": "ffdhe4096" 00:22:02.953 } 00:22:02.953 } 00:22:02.953 ]' 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:02.953 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.953 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.953 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.953 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.214 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:03.214 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:03.787 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.787 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.787 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.787 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.048 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.048 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:04.048 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.048 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.048 14:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.048 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
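# ---- [editor's note: commentary added in review, not part of the captured output] ----
# The "for dhgroup ..." / "for keyid ..." xtrace lines above (target/auth.sh
# @119-120) expose the matrix this whole stretch is walking: sha256 crossed
# with each FFDHE group and each key id, with bdev_nvme_set_options re-run
# before every attempt so the host offers only the pair under test and the
# negotiation cannot drift to another group. Reconstructed skeleton (digest
# is fixed at sha256 in this excerpt; hostrpc and connect_authenticate are
# the script's own helpers, shown expanded throughout the log):

for dhgroup in "${dhgroups[@]}"; do   # ffdhe3072, ffdhe4096, ffdhe6144, ... in this log
    for keyid in "${!keys[@]}"; do    # 0..3
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done
# ---- [end editor's note] ----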
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:04.308 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.568 { 00:22:04.568 "cntlid": 33, 00:22:04.568 "qid": 0, 00:22:04.568 "state": "enabled", 00:22:04.568 "thread": "nvmf_tgt_poll_group_000", 00:22:04.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:04.568 "listen_address": { 00:22:04.568 "trtype": "TCP", 00:22:04.568 "adrfam": "IPv4", 00:22:04.568 "traddr": "10.0.0.2", 00:22:04.568 "trsvcid": "4420" 00:22:04.568 }, 00:22:04.568 "peer_address": { 00:22:04.568 "trtype": "TCP", 00:22:04.568 "adrfam": "IPv4", 00:22:04.568 "traddr": "10.0.0.1", 00:22:04.568 "trsvcid": "44378" 00:22:04.568 }, 00:22:04.568 "auth": { 00:22:04.568 "state": "completed", 00:22:04.568 "digest": "sha256", 00:22:04.568 "dhgroup": "ffdhe6144" 00:22:04.568 } 00:22:04.568 } 00:22:04.568 ]' 00:22:04.568 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.829 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret 
DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:04.830 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:05.770 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.770 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.771 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:06.032 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.292 { 00:22:06.292 "cntlid": 35, 00:22:06.292 "qid": 0, 00:22:06.292 "state": "enabled", 00:22:06.292 "thread": "nvmf_tgt_poll_group_000", 00:22:06.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:06.292 "listen_address": { 00:22:06.292 "trtype": "TCP", 00:22:06.292 "adrfam": "IPv4", 00:22:06.292 "traddr": "10.0.0.2", 00:22:06.292 "trsvcid": "4420" 00:22:06.292 }, 00:22:06.292 "peer_address": { 00:22:06.292 "trtype": "TCP", 00:22:06.292 "adrfam": "IPv4", 00:22:06.292 "traddr": "10.0.0.1", 00:22:06.292 "trsvcid": "44408" 00:22:06.292 }, 00:22:06.292 "auth": { 00:22:06.292 "state": "completed", 00:22:06.292 "digest": "sha256", 00:22:06.292 "dhgroup": "ffdhe6144" 00:22:06.292 } 00:22:06.292 } 00:22:06.292 ]' 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:06.292 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.553 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:06.553 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.553 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.553 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.553 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.814 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:06.814 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:07.385 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.385 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.385 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.385 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.385 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.385 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.385 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:07.386 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.647 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.907 00:22:07.907 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.907 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.907 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.168 { 00:22:08.168 "cntlid": 37, 00:22:08.168 "qid": 0, 00:22:08.168 "state": "enabled", 00:22:08.168 "thread": "nvmf_tgt_poll_group_000", 00:22:08.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:08.168 "listen_address": { 00:22:08.168 "trtype": "TCP", 00:22:08.168 "adrfam": "IPv4", 00:22:08.168 "traddr": "10.0.0.2", 00:22:08.168 "trsvcid": "4420" 00:22:08.168 }, 00:22:08.168 "peer_address": { 00:22:08.168 "trtype": "TCP", 00:22:08.168 "adrfam": "IPv4", 00:22:08.168 "traddr": "10.0.0.1", 00:22:08.168 "trsvcid": "44430" 00:22:08.168 }, 00:22:08.168 "auth": { 00:22:08.168 "state": "completed", 00:22:08.168 "digest": "sha256", 00:22:08.168 "dhgroup": "ffdhe6144" 00:22:08.168 } 00:22:08.168 } 00:22:08.168 ]' 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:08.168 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.428 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:08.428 14:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:09.000 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.260 14:20:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.260 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:09.521 00:22:09.521 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.521 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.521 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.781 { 00:22:09.781 "cntlid": 39, 00:22:09.781 "qid": 0, 00:22:09.781 "state": "enabled", 00:22:09.781 "thread": "nvmf_tgt_poll_group_000", 00:22:09.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.781 "listen_address": { 00:22:09.781 "trtype": "TCP", 00:22:09.781 "adrfam": "IPv4", 00:22:09.781 "traddr": "10.0.0.2", 00:22:09.781 "trsvcid": "4420" 00:22:09.781 }, 00:22:09.781 "peer_address": { 00:22:09.781 "trtype": "TCP", 00:22:09.781 "adrfam": "IPv4", 00:22:09.781 "traddr": "10.0.0.1", 00:22:09.781 "trsvcid": "44464" 00:22:09.781 }, 00:22:09.781 "auth": { 00:22:09.781 "state": "completed", 00:22:09.781 "digest": "sha256", 00:22:09.781 "dhgroup": "ffdhe6144" 00:22:09.781 } 00:22:09.781 } 00:22:09.781 ]' 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:09.781 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.041 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:10.041 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.041 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:22:10.042 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.042 14:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.042 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:10.042 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
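Note on the trace around this point: the entries above and below form one full iteration of the test's digest/dhgroup/key matrix (here sha256, ffdhe8192, key0). Condensed to the commands actually issued, a single target-side pass looks roughly like the sketch below. The rpc.py path, host socket, NQNs, addresses, and key names are copied from this log; the condensed framing (and the rpc_cmd wrapper, whose target-side socket is hidden behind xtrace_disable in this trace) is a reconstruction of what target/auth.sh appears to do, not a verbatim excerpt.

  # One connect_authenticate pass (digest sha256, dhgroup ffdhe8192, keyid 0),
  # condensed from the traced commands in this log.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Pin the host-side initiator (the SPDK app on /var/tmp/host.sock) to the
  # digest and DH group under test.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
  # Authorize the host NQN on the subsystem with this iteration's key pair
  # (rpc_cmd is the test's wrapper around rpc.py for the target instance).
  rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Attach a controller; the DH-HMAC-CHAP handshake runs during this connect.
  "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Assert the qpair negotiated exactly what was requested.
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # "completed"
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest'   # "sha256"
  rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.dhgroup'  # "ffdhe8192"
  "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
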
00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.982 14:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:11.551 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.551 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.551 { 00:22:11.551 "cntlid": 41, 00:22:11.551 "qid": 0, 00:22:11.551 "state": "enabled", 00:22:11.551 "thread": "nvmf_tgt_poll_group_000", 00:22:11.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:11.551 "listen_address": { 00:22:11.551 "trtype": "TCP", 00:22:11.552 "adrfam": "IPv4", 00:22:11.552 "traddr": "10.0.0.2", 00:22:11.552 "trsvcid": "4420" 00:22:11.552 }, 00:22:11.552 "peer_address": { 00:22:11.552 "trtype": "TCP", 00:22:11.552 "adrfam": "IPv4", 00:22:11.552 "traddr": "10.0.0.1", 00:22:11.552 "trsvcid": "37850" 00:22:11.552 }, 00:22:11.552 "auth": { 00:22:11.552 "state": "completed", 00:22:11.552 "digest": "sha256", 00:22:11.552 "dhgroup": "ffdhe8192" 00:22:11.552 } 00:22:11.552 } 00:22:11.552 ]' 00:22:11.552 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.814 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:11.814 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.814 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.814 14:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.814 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.814 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.814 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.079 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:12.079 14:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:12.649 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.909 14:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:13.480 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.480 { 00:22:13.480 "cntlid": 43, 00:22:13.480 "qid": 0, 00:22:13.480 "state": "enabled", 00:22:13.480 "thread": "nvmf_tgt_poll_group_000", 00:22:13.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.480 "listen_address": { 00:22:13.480 "trtype": "TCP", 00:22:13.480 "adrfam": "IPv4", 00:22:13.480 "traddr": "10.0.0.2", 00:22:13.480 "trsvcid": "4420" 00:22:13.480 }, 00:22:13.480 "peer_address": { 00:22:13.480 "trtype": "TCP", 00:22:13.480 "adrfam": "IPv4", 00:22:13.480 "traddr": "10.0.0.1", 00:22:13.480 "trsvcid": "37886" 00:22:13.480 }, 00:22:13.480 "auth": { 00:22:13.480 "state": "completed", 00:22:13.480 "digest": "sha256", 00:22:13.480 "dhgroup": "ffdhe8192" 00:22:13.480 } 00:22:13.480 } 00:22:13.480 ]' 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.480 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.741 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.741 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.741 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.741 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:13.741 14:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.688 14:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.688 14:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:15.261 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.261 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.261 { 00:22:15.261 "cntlid": 45, 00:22:15.261 "qid": 0, 00:22:15.261 "state": "enabled", 00:22:15.261 "thread": "nvmf_tgt_poll_group_000", 00:22:15.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:15.261 "listen_address": { 00:22:15.261 "trtype": "TCP", 00:22:15.261 "adrfam": "IPv4", 00:22:15.261 "traddr": "10.0.0.2", 00:22:15.261 "trsvcid": "4420" 00:22:15.261 }, 00:22:15.261 "peer_address": { 00:22:15.261 "trtype": "TCP", 00:22:15.261 "adrfam": "IPv4", 00:22:15.261 "traddr": "10.0.0.1", 00:22:15.261 "trsvcid": "37914" 00:22:15.261 }, 00:22:15.261 "auth": { 00:22:15.261 "state": "completed", 00:22:15.261 "digest": "sha256", 00:22:15.261 "dhgroup": "ffdhe8192" 00:22:15.261 } 00:22:15.261 } 00:22:15.261 ]' 00:22:15.261 
14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.522 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.522 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.522 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.522 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.522 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.522 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.522 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.783 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:15.783 14:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:16.351 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:16.611 14:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.611 14:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:17.190 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.190 { 00:22:17.190 "cntlid": 47, 00:22:17.190 "qid": 0, 00:22:17.190 "state": "enabled", 00:22:17.190 "thread": "nvmf_tgt_poll_group_000", 00:22:17.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:17.190 "listen_address": { 00:22:17.190 "trtype": "TCP", 00:22:17.190 "adrfam": "IPv4", 00:22:17.190 "traddr": "10.0.0.2", 00:22:17.190 "trsvcid": "4420" 00:22:17.190 }, 00:22:17.190 "peer_address": { 00:22:17.190 "trtype": "TCP", 00:22:17.190 "adrfam": "IPv4", 00:22:17.190 "traddr": "10.0.0.1", 00:22:17.190 "trsvcid": "37932" 00:22:17.190 }, 00:22:17.190 "auth": { 00:22:17.190 "state": "completed", 00:22:17.190 
"digest": "sha256", 00:22:17.190 "dhgroup": "ffdhe8192" 00:22:17.190 } 00:22:17.190 } 00:22:17.190 ]' 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:17.190 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.451 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.451 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.451 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.452 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.452 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.452 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:17.452 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:18.392 14:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.392 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.652 00:22:18.652 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.652 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.652 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.913 { 00:22:18.913 "cntlid": 49, 00:22:18.913 "qid": 0, 00:22:18.913 "state": "enabled", 00:22:18.913 "thread": "nvmf_tgt_poll_group_000", 00:22:18.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:18.913 "listen_address": { 00:22:18.913 "trtype": "TCP", 00:22:18.913 "adrfam": "IPv4", 
00:22:18.913 "traddr": "10.0.0.2", 00:22:18.913 "trsvcid": "4420" 00:22:18.913 }, 00:22:18.913 "peer_address": { 00:22:18.913 "trtype": "TCP", 00:22:18.913 "adrfam": "IPv4", 00:22:18.913 "traddr": "10.0.0.1", 00:22:18.913 "trsvcid": "37968" 00:22:18.913 }, 00:22:18.913 "auth": { 00:22:18.913 "state": "completed", 00:22:18.913 "digest": "sha384", 00:22:18.913 "dhgroup": "null" 00:22:18.913 } 00:22:18.913 } 00:22:18.913 ]' 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.913 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.173 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:19.173 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:19.743 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.002 14:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:20.261 00:22:20.261 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:20.261 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:20.261 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.521 { 00:22:20.521 "cntlid": 51, 00:22:20.521 "qid": 0, 00:22:20.521 "state": "enabled", 
00:22:20.521 "thread": "nvmf_tgt_poll_group_000", 00:22:20.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:20.521 "listen_address": { 00:22:20.521 "trtype": "TCP", 00:22:20.521 "adrfam": "IPv4", 00:22:20.521 "traddr": "10.0.0.2", 00:22:20.521 "trsvcid": "4420" 00:22:20.521 }, 00:22:20.521 "peer_address": { 00:22:20.521 "trtype": "TCP", 00:22:20.521 "adrfam": "IPv4", 00:22:20.521 "traddr": "10.0.0.1", 00:22:20.521 "trsvcid": "44292" 00:22:20.521 }, 00:22:20.521 "auth": { 00:22:20.521 "state": "completed", 00:22:20.521 "digest": "sha384", 00:22:20.521 "dhgroup": "null" 00:22:20.521 } 00:22:20.521 } 00:22:20.521 ]' 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.521 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.782 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:20.782 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:22:21.352 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.612 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.872 00:22:21.872 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.872 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.872 14:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.134 14:20:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.134 { 00:22:22.134 "cntlid": 53, 00:22:22.134 "qid": 0, 00:22:22.134 "state": "enabled", 00:22:22.134 "thread": "nvmf_tgt_poll_group_000", 00:22:22.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:22.134 "listen_address": { 00:22:22.134 "trtype": "TCP", 00:22:22.134 "adrfam": "IPv4", 00:22:22.134 "traddr": "10.0.0.2", 00:22:22.134 "trsvcid": "4420" 00:22:22.134 }, 00:22:22.134 "peer_address": { 00:22:22.134 "trtype": "TCP", 00:22:22.134 "adrfam": "IPv4", 00:22:22.134 "traddr": "10.0.0.1", 00:22:22.134 "trsvcid": "44320" 00:22:22.134 }, 00:22:22.134 "auth": { 00:22:22.134 "state": "completed", 00:22:22.134 "digest": "sha384", 00:22:22.134 "dhgroup": "null" 00:22:22.134 } 00:22:22.134 } 00:22:22.134 ]' 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.134 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.395 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:22.395 14:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:23.030 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.332 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.333 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.333 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.612 { 00:22:23.612 "cntlid": 55, 00:22:23.612 "qid": 0, 00:22:23.612 "state": "enabled", 00:22:23.612 "thread": "nvmf_tgt_poll_group_000", 00:22:23.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:23.612 "listen_address": { 00:22:23.612 "trtype": "TCP", 00:22:23.612 "adrfam": "IPv4", 00:22:23.612 "traddr": "10.0.0.2", 00:22:23.612 "trsvcid": "4420" 00:22:23.612 }, 00:22:23.612 "peer_address": { 00:22:23.612 "trtype": "TCP", 00:22:23.612 "adrfam": "IPv4", 00:22:23.612 "traddr": "10.0.0.1", 00:22:23.612 "trsvcid": "44354" 00:22:23.612 }, 00:22:23.612 "auth": { 00:22:23.612 "state": "completed", 00:22:23.612 "digest": "sha384", 00:22:23.612 "dhgroup": "null" 00:22:23.612 } 00:22:23.612 } 00:22:23.612 ]' 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.612 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.873 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:23.873 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.873 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.873 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.873 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.873 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:23.873 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.816 14:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.816 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.817 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.817 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.817 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.817 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.817 14:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.078 00:22:25.078 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.078 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.078 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.340 { 00:22:25.340 "cntlid": 57, 00:22:25.340 "qid": 0, 00:22:25.340 "state": "enabled", 00:22:25.340 "thread": "nvmf_tgt_poll_group_000", 00:22:25.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:25.340 "listen_address": { 00:22:25.340 "trtype": "TCP", 00:22:25.340 "adrfam": "IPv4", 00:22:25.340 "traddr": "10.0.0.2", 00:22:25.340 "trsvcid": "4420" 00:22:25.340 }, 00:22:25.340 "peer_address": { 00:22:25.340 "trtype": "TCP", 00:22:25.340 "adrfam": "IPv4", 00:22:25.340 "traddr": "10.0.0.1", 00:22:25.340 "trsvcid": "44376" 00:22:25.340 }, 00:22:25.340 "auth": { 00:22:25.340 "state": "completed", 00:22:25.340 "digest": "sha384", 00:22:25.340 "dhgroup": "ffdhe2048" 00:22:25.340 } 00:22:25.340 } 00:22:25.340 ]' 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.340 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.341 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.341 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.601 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:25.601 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.173 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.434 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.695 00:22:26.695 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.695 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.695 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.957 { 00:22:26.957 "cntlid": 59, 00:22:26.957 "qid": 0, 00:22:26.957 "state": "enabled", 00:22:26.957 "thread": "nvmf_tgt_poll_group_000", 00:22:26.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:26.957 "listen_address": { 00:22:26.957 "trtype": "TCP", 00:22:26.957 "adrfam": "IPv4", 00:22:26.957 "traddr": "10.0.0.2", 00:22:26.957 "trsvcid": "4420" 00:22:26.957 }, 00:22:26.957 "peer_address": { 00:22:26.957 "trtype": "TCP", 00:22:26.957 "adrfam": "IPv4", 00:22:26.957 "traddr": "10.0.0.1", 00:22:26.957 "trsvcid": "44412" 00:22:26.957 }, 00:22:26.957 "auth": { 00:22:26.957 "state": "completed", 00:22:26.957 "digest": "sha384", 00:22:26.957 "dhgroup": "ffdhe2048" 00:22:26.957 } 00:22:26.957 } 00:22:26.957 ]' 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.957 14:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.218 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:27.218 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:27.790 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.051 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.052 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.052 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.314 00:22:28.314 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.314 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.314 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.574 { 00:22:28.574 "cntlid": 61, 00:22:28.574 "qid": 0, 00:22:28.574 "state": "enabled", 00:22:28.574 "thread": "nvmf_tgt_poll_group_000", 00:22:28.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:28.574 "listen_address": { 00:22:28.574 "trtype": "TCP", 00:22:28.574 "adrfam": "IPv4", 00:22:28.574 "traddr": "10.0.0.2", 00:22:28.574 "trsvcid": "4420" 00:22:28.574 }, 00:22:28.574 "peer_address": { 00:22:28.574 "trtype": "TCP", 00:22:28.574 "adrfam": "IPv4", 00:22:28.574 "traddr": "10.0.0.1", 00:22:28.574 "trsvcid": "44442" 00:22:28.574 }, 00:22:28.574 "auth": { 00:22:28.574 "state": "completed", 00:22:28.574 "digest": "sha384", 00:22:28.574 "dhgroup": "ffdhe2048" 00:22:28.574 } 00:22:28.574 } 00:22:28.574 ]' 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.574 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.834 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:28.834 14:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:29.405 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.666 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:29.926 00:22:29.926 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.926 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.926 14:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.188 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.188 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.188 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.188 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.188 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.188 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.189 { 00:22:30.189 "cntlid": 63, 00:22:30.189 "qid": 0, 00:22:30.189 "state": "enabled", 00:22:30.189 "thread": "nvmf_tgt_poll_group_000", 00:22:30.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:30.189 "listen_address": { 00:22:30.189 "trtype": "TCP", 00:22:30.189 "adrfam": "IPv4", 00:22:30.189 "traddr": "10.0.0.2", 00:22:30.189 "trsvcid": "4420" 00:22:30.189 }, 00:22:30.189 "peer_address": { 00:22:30.189 "trtype": "TCP", 00:22:30.189 "adrfam": "IPv4", 00:22:30.189 "traddr": "10.0.0.1", 00:22:30.189 "trsvcid": "55736" 00:22:30.189 }, 00:22:30.189 "auth": { 00:22:30.189 "state": "completed", 00:22:30.189 "digest": "sha384", 00:22:30.189 "dhgroup": "ffdhe2048" 00:22:30.189 } 00:22:30.189 } 00:22:30.189 ]' 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.189 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.449 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:30.449 14:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:22:31.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:31.020 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.280 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.541 
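[editor's note] The xtrace above repeats one pattern per (digest, dhgroup, keyid) combination. For readability, here is a minimal bash sketch of the setup/connect half of one iteration, condensed from the commands logged above. It assumes the target listens on 10.0.0.2:4420, the target RPC server is on the default socket while the host uses /var/tmp/host.sock (as in this log), and the DH-HMAC-CHAP keys keyN/ckeyN were registered earlier in the test run (not shown in this section).

# Condensed sketch of one loop iteration; all flags appear verbatim in the log.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Restrict the host-side initiator to a single digest / DH group pair,
# so a successful handshake proves exactly this combination works.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Allow the host on the target subsystem, bound to one key pair
# (key0..key3 / ckey0..; assumed already loaded into the keyring).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach from the host; this forces the DH-HMAC-CHAP handshake
# with the parameters selected above.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0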
00:22:31.541 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:31.541 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:31.541 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.802 { 00:22:31.802 "cntlid": 65, 00:22:31.802 "qid": 0, 00:22:31.802 "state": "enabled", 00:22:31.802 "thread": "nvmf_tgt_poll_group_000", 00:22:31.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:31.802 "listen_address": { 00:22:31.802 "trtype": "TCP", 00:22:31.802 "adrfam": "IPv4", 00:22:31.802 "traddr": "10.0.0.2", 00:22:31.802 "trsvcid": "4420" 00:22:31.802 }, 00:22:31.802 "peer_address": { 00:22:31.802 "trtype": "TCP", 00:22:31.802 "adrfam": "IPv4", 00:22:31.802 "traddr": "10.0.0.1", 00:22:31.802 "trsvcid": "55760" 00:22:31.802 }, 00:22:31.802 "auth": { 00:22:31.802 "state": "completed", 00:22:31.802 "digest": "sha384", 00:22:31.802 "dhgroup": "ffdhe3072" 00:22:31.802 } 00:22:31.802 } 00:22:31.802 ]' 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.802 14:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.062 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:32.062 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:32.632 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:32.891 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.150 00:22:33.150 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.150 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.150 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.410 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.410 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.411 { 00:22:33.411 "cntlid": 67, 00:22:33.411 "qid": 0, 00:22:33.411 "state": "enabled", 00:22:33.411 "thread": "nvmf_tgt_poll_group_000", 00:22:33.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:33.411 "listen_address": { 00:22:33.411 "trtype": "TCP", 00:22:33.411 "adrfam": "IPv4", 00:22:33.411 "traddr": "10.0.0.2", 00:22:33.411 "trsvcid": "4420" 00:22:33.411 }, 00:22:33.411 "peer_address": { 00:22:33.411 "trtype": "TCP", 00:22:33.411 "adrfam": "IPv4", 00:22:33.411 "traddr": "10.0.0.1", 00:22:33.411 "trsvcid": "55804" 00:22:33.411 }, 00:22:33.411 "auth": { 00:22:33.411 "state": "completed", 00:22:33.411 "digest": "sha384", 00:22:33.411 "dhgroup": "ffdhe3072" 00:22:33.411 } 00:22:33.411 } 00:22:33.411 ]' 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.411 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.671 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret 
DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:33.671 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.242 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.508 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:34.768 00:22:34.768 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.768 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.768 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.029 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.029 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.029 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.029 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.029 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.029 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.029 { 00:22:35.029 "cntlid": 69, 00:22:35.029 "qid": 0, 00:22:35.029 "state": "enabled", 00:22:35.029 "thread": "nvmf_tgt_poll_group_000", 00:22:35.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:35.029 "listen_address": { 00:22:35.029 "trtype": "TCP", 00:22:35.029 "adrfam": "IPv4", 00:22:35.029 "traddr": "10.0.0.2", 00:22:35.029 "trsvcid": "4420" 00:22:35.029 }, 00:22:35.029 "peer_address": { 00:22:35.029 "trtype": "TCP", 00:22:35.029 "adrfam": "IPv4", 00:22:35.029 "traddr": "10.0.0.1", 00:22:35.029 "trsvcid": "55840" 00:22:35.030 }, 00:22:35.030 "auth": { 00:22:35.030 "state": "completed", 00:22:35.030 "digest": "sha384", 00:22:35.030 "dhgroup": "ffdhe3072" 00:22:35.030 } 00:22:35.030 } 00:22:35.030 ]' 00:22:35.030 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.030 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.030 14:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.030 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:35.030 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.030 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.030 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.030 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:22:35.290 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:35.290 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:35.862 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
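[editor's note] After each attach, the script verifies the negotiated authentication and then tears the connection down (the auth.sh@73-83 steps recurring throughout this section). A minimal sketch of that half, restated from the logged commands; the sha384/ffdhe3072 values match the current loop iteration, and socket paths and names are taken from this log.

# Condensed sketch of the verify/teardown half of each iteration.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# The controller must have come up on the host side...
[[ "$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# ...and the target's qpair must report the negotiated auth parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe3072 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# Tear down before trying the next key / DH group combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"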
00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.122 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.382 00:22:36.382 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.382 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.382 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:36.642 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.642 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:36.642 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.642 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.642 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.642 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:36.642 { 00:22:36.642 "cntlid": 71, 00:22:36.642 "qid": 0, 00:22:36.642 "state": "enabled", 00:22:36.642 "thread": "nvmf_tgt_poll_group_000", 00:22:36.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:36.643 "listen_address": { 00:22:36.643 "trtype": "TCP", 00:22:36.643 "adrfam": "IPv4", 00:22:36.643 "traddr": "10.0.0.2", 00:22:36.643 "trsvcid": "4420" 00:22:36.643 }, 00:22:36.643 "peer_address": { 00:22:36.643 "trtype": "TCP", 00:22:36.643 "adrfam": "IPv4", 00:22:36.643 "traddr": "10.0.0.1", 00:22:36.643 "trsvcid": "55878" 00:22:36.643 }, 00:22:36.643 "auth": { 00:22:36.643 "state": "completed", 00:22:36.643 "digest": "sha384", 00:22:36.643 "dhgroup": "ffdhe3072" 00:22:36.643 } 00:22:36.643 } 00:22:36.643 ]' 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:36.643 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.903 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:36.903 14:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:37.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:37.475 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
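
Each successful attach is then verified in two ways before the loop advances to the next digest/DH-group round: the target's nvmf_subsystem_get_qpairs output is checked with jq to confirm the qpair completed authentication with the expected parameters, and the same credentials are replayed through nvme-cli to exercise the kernel initiator path as well. A sketch of that verification step, with the secret value elided (the full strings appear in the trace above):

  # Variables as in the earlier sketch.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Pull the qpair list from the target and assert on its "auth" block;
  # these are the exact jq filters traced above.
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # Replay the credentials through nvme-cli. The secrets use the
  # "DHHC-1:<t>:<base64>:" representation; the 00..03 second field in the
  # traced keys appears to track the hash transform applied to each secret.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret 'DHHC-1:03:<elided, full value in the trace above>' \
      # ...plus --dhchap-ctrl-secret when the key index has a ckey
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

After the nvme-cli round trip, the host is removed from the subsystem with nvmf_subsystem_remove_host and the loop repeats with the next key index and, eventually, the next DH group (the trace below continues through ffdhe4096, ffdhe6144, and ffdhe8192).
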
00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.735 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:37.996 00:22:37.996 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.996 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.996 14:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.257 { 00:22:38.257 "cntlid": 73, 00:22:38.257 "qid": 0, 00:22:38.257 "state": "enabled", 00:22:38.257 "thread": "nvmf_tgt_poll_group_000", 00:22:38.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.257 "listen_address": { 00:22:38.257 "trtype": "TCP", 00:22:38.257 "adrfam": "IPv4", 00:22:38.257 "traddr": "10.0.0.2", 00:22:38.257 "trsvcid": "4420" 00:22:38.257 }, 00:22:38.257 "peer_address": { 00:22:38.257 "trtype": "TCP", 00:22:38.257 "adrfam": "IPv4", 00:22:38.257 "traddr": "10.0.0.1", 00:22:38.257 "trsvcid": "55902" 00:22:38.257 }, 00:22:38.257 "auth": { 00:22:38.257 "state": "completed", 00:22:38.257 "digest": "sha384", 00:22:38.257 "dhgroup": "ffdhe4096" 00:22:38.257 } 00:22:38.257 } 00:22:38.257 ]' 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.257 
14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.257 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:38.518 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:38.518 14:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:39.088 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.088 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.088 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.088 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.088 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.088 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.089 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:39.089 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.349 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:39.610 00:22:39.610 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.610 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.610 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.870 { 00:22:39.870 "cntlid": 75, 00:22:39.870 "qid": 0, 00:22:39.870 "state": "enabled", 00:22:39.870 "thread": "nvmf_tgt_poll_group_000", 00:22:39.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:39.870 "listen_address": { 00:22:39.870 "trtype": "TCP", 00:22:39.870 "adrfam": "IPv4", 00:22:39.870 "traddr": "10.0.0.2", 00:22:39.870 "trsvcid": "4420" 00:22:39.870 }, 00:22:39.870 "peer_address": { 00:22:39.870 "trtype": "TCP", 00:22:39.870 "adrfam": "IPv4", 00:22:39.870 "traddr": "10.0.0.1", 00:22:39.870 "trsvcid": "55924" 00:22:39.870 }, 00:22:39.870 "auth": { 00:22:39.870 "state": "completed", 00:22:39.870 "digest": "sha384", 00:22:39.870 "dhgroup": "ffdhe4096" 00:22:39.870 } 00:22:39.870 } 00:22:39.870 ]' 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.870 14:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.132 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:40.132 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:40.702 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:40.962 14:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.223 00:22:41.223 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.223 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.223 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.483 { 00:22:41.483 "cntlid": 77, 00:22:41.483 "qid": 0, 00:22:41.483 "state": "enabled", 00:22:41.483 "thread": "nvmf_tgt_poll_group_000", 00:22:41.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:41.483 "listen_address": { 00:22:41.483 "trtype": "TCP", 00:22:41.483 "adrfam": "IPv4", 00:22:41.483 "traddr": "10.0.0.2", 00:22:41.483 "trsvcid": "4420" 00:22:41.483 }, 00:22:41.483 "peer_address": { 00:22:41.483 "trtype": "TCP", 00:22:41.483 "adrfam": "IPv4", 00:22:41.483 "traddr": "10.0.0.1", 00:22:41.483 "trsvcid": "43194" 00:22:41.483 }, 00:22:41.483 "auth": { 00:22:41.483 "state": "completed", 00:22:41.483 "digest": "sha384", 00:22:41.483 "dhgroup": "ffdhe4096" 00:22:41.483 } 00:22:41.483 } 00:22:41.483 ]' 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:41.483 14:20:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.483 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.743 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:41.744 14:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:42.314 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.314 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.314 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.314 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.574 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:42.834 00:22:42.834 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.834 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.834 14:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.094 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.094 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.094 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.094 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.094 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.094 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.094 { 00:22:43.094 "cntlid": 79, 00:22:43.094 "qid": 0, 00:22:43.094 "state": "enabled", 00:22:43.094 "thread": "nvmf_tgt_poll_group_000", 00:22:43.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.094 "listen_address": { 00:22:43.094 "trtype": "TCP", 00:22:43.094 "adrfam": "IPv4", 00:22:43.094 "traddr": "10.0.0.2", 00:22:43.094 "trsvcid": "4420" 00:22:43.094 }, 00:22:43.094 "peer_address": { 00:22:43.094 "trtype": "TCP", 00:22:43.094 "adrfam": "IPv4", 00:22:43.094 "traddr": "10.0.0.1", 00:22:43.094 "trsvcid": "43220" 00:22:43.094 }, 00:22:43.094 "auth": { 00:22:43.094 "state": "completed", 00:22:43.094 "digest": "sha384", 00:22:43.094 "dhgroup": "ffdhe4096" 00:22:43.094 } 00:22:43.094 } 00:22:43.094 ]' 00:22:43.094 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.095 14:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.095 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.095 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:43.095 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.355 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.355 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.355 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.355 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:43.355 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:44.305 14:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.305 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.571 00:22:44.571 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.571 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.571 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.831 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.831 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.831 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.831 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.831 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.831 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.832 { 00:22:44.832 "cntlid": 81, 00:22:44.832 "qid": 0, 00:22:44.832 "state": "enabled", 00:22:44.832 "thread": "nvmf_tgt_poll_group_000", 00:22:44.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:44.832 "listen_address": { 00:22:44.832 "trtype": "TCP", 00:22:44.832 "adrfam": "IPv4", 00:22:44.832 "traddr": "10.0.0.2", 00:22:44.832 "trsvcid": "4420" 00:22:44.832 }, 00:22:44.832 "peer_address": { 00:22:44.832 "trtype": "TCP", 00:22:44.832 "adrfam": "IPv4", 00:22:44.832 "traddr": "10.0.0.1", 00:22:44.832 "trsvcid": "43234" 00:22:44.832 }, 00:22:44.832 "auth": { 00:22:44.832 "state": "completed", 00:22:44.832 "digest": 
"sha384", 00:22:44.832 "dhgroup": "ffdhe6144" 00:22:44.832 } 00:22:44.832 } 00:22:44.832 ]' 00:22:44.832 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.832 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:44.832 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.832 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:44.832 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.092 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.092 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.092 14:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.092 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:45.092 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:46.032 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.032 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.033 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.292 00:22:46.292 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.292 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.292 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.552 { 00:22:46.552 "cntlid": 83, 00:22:46.552 "qid": 0, 00:22:46.552 "state": "enabled", 00:22:46.552 "thread": "nvmf_tgt_poll_group_000", 00:22:46.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:46.552 "listen_address": { 00:22:46.552 "trtype": "TCP", 00:22:46.552 "adrfam": "IPv4", 00:22:46.552 "traddr": "10.0.0.2", 00:22:46.552 
"trsvcid": "4420" 00:22:46.552 }, 00:22:46.552 "peer_address": { 00:22:46.552 "trtype": "TCP", 00:22:46.552 "adrfam": "IPv4", 00:22:46.552 "traddr": "10.0.0.1", 00:22:46.552 "trsvcid": "43260" 00:22:46.552 }, 00:22:46.552 "auth": { 00:22:46.552 "state": "completed", 00:22:46.552 "digest": "sha384", 00:22:46.552 "dhgroup": "ffdhe6144" 00:22:46.552 } 00:22:46.552 } 00:22:46.552 ]' 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:46.552 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.812 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.812 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.812 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.812 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:46.812 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:47.751 
14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:47.751 14:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.009 00:22:48.009 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.009 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.009 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.269 { 00:22:48.269 "cntlid": 85, 00:22:48.269 "qid": 0, 00:22:48.269 "state": "enabled", 00:22:48.269 "thread": "nvmf_tgt_poll_group_000", 00:22:48.269 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:48.269 "listen_address": { 00:22:48.269 "trtype": "TCP", 00:22:48.269 "adrfam": "IPv4", 00:22:48.269 "traddr": "10.0.0.2", 00:22:48.269 "trsvcid": "4420" 00:22:48.269 }, 00:22:48.269 "peer_address": { 00:22:48.269 "trtype": "TCP", 00:22:48.269 "adrfam": "IPv4", 00:22:48.269 "traddr": "10.0.0.1", 00:22:48.269 "trsvcid": "43282" 00:22:48.269 }, 00:22:48.269 "auth": { 00:22:48.269 "state": "completed", 00:22:48.269 "digest": "sha384", 00:22:48.269 "dhgroup": "ffdhe6144" 00:22:48.269 } 00:22:48.269 } 00:22:48.269 ]' 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.269 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.529 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.529 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.529 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.529 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:48.529 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:49.471 14:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.471 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:49.731 00:22:49.731 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.731 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.731 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.991 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.991 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.992 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.992 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.992 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.992 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.992 { 00:22:49.992 "cntlid": 87, 
00:22:49.992 "qid": 0, 00:22:49.992 "state": "enabled", 00:22:49.992 "thread": "nvmf_tgt_poll_group_000", 00:22:49.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:49.992 "listen_address": { 00:22:49.992 "trtype": "TCP", 00:22:49.992 "adrfam": "IPv4", 00:22:49.992 "traddr": "10.0.0.2", 00:22:49.992 "trsvcid": "4420" 00:22:49.992 }, 00:22:49.992 "peer_address": { 00:22:49.992 "trtype": "TCP", 00:22:49.992 "adrfam": "IPv4", 00:22:49.992 "traddr": "10.0.0.1", 00:22:49.992 "trsvcid": "43316" 00:22:49.992 }, 00:22:49.992 "auth": { 00:22:49.992 "state": "completed", 00:22:49.992 "digest": "sha384", 00:22:49.992 "dhgroup": "ffdhe6144" 00:22:49.992 } 00:22:49.992 } 00:22:49.992 ]' 00:22:49.992 14:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.992 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.992 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.992 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:49.992 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.252 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.252 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.252 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.252 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:50.252 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:50.823 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.083 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:51.083 14:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.083 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.084 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.084 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:51.654 00:22:51.654 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.654 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.654 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.914 { 00:22:51.914 "cntlid": 89, 00:22:51.914 "qid": 0, 00:22:51.914 "state": "enabled", 00:22:51.914 "thread": "nvmf_tgt_poll_group_000", 00:22:51.914 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:51.914 "listen_address": { 00:22:51.914 "trtype": "TCP", 00:22:51.914 "adrfam": "IPv4", 00:22:51.914 "traddr": "10.0.0.2", 00:22:51.914 "trsvcid": "4420" 00:22:51.914 }, 00:22:51.914 "peer_address": { 00:22:51.914 "trtype": "TCP", 00:22:51.914 "adrfam": "IPv4", 00:22:51.914 "traddr": "10.0.0.1", 00:22:51.914 "trsvcid": "55008" 00:22:51.914 }, 00:22:51.914 "auth": { 00:22:51.914 "state": "completed", 00:22:51.914 "digest": "sha384", 00:22:51.914 "dhgroup": "ffdhe8192" 00:22:51.914 } 00:22:51.914 } 00:22:51.914 ]' 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.914 14:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.175 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:52.175 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:52.745 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.745 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.745 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.745 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.745 14:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.745 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.745 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:52.745 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.006 14:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.577 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.577 { 00:22:53.577 "cntlid": 91, 00:22:53.577 "qid": 0, 00:22:53.577 "state": "enabled", 00:22:53.577 "thread": "nvmf_tgt_poll_group_000", 00:22:53.577 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:53.577 "listen_address": { 00:22:53.577 "trtype": "TCP", 00:22:53.577 "adrfam": "IPv4", 00:22:53.577 "traddr": "10.0.0.2", 00:22:53.577 "trsvcid": "4420" 00:22:53.577 }, 00:22:53.577 "peer_address": { 00:22:53.577 "trtype": "TCP", 00:22:53.577 "adrfam": "IPv4", 00:22:53.577 "traddr": "10.0.0.1", 00:22:53.577 "trsvcid": "55040" 00:22:53.577 }, 00:22:53.577 "auth": { 00:22:53.577 "state": "completed", 00:22:53.577 "digest": "sha384", 00:22:53.577 "dhgroup": "ffdhe8192" 00:22:53.577 } 00:22:53.577 } 00:22:53.577 ]' 00:22:53.577 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.838 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.838 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.838 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:53.838 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.838 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.838 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.838 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.098 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:54.098 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:22:54.668 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.668 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:54.668 14:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.668 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.668 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.668 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.668 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.668 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.928 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.498 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.498 14:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.498 { 00:22:55.498 "cntlid": 93, 00:22:55.498 "qid": 0, 00:22:55.498 "state": "enabled", 00:22:55.498 "thread": "nvmf_tgt_poll_group_000", 00:22:55.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:55.498 "listen_address": { 00:22:55.498 "trtype": "TCP", 00:22:55.498 "adrfam": "IPv4", 00:22:55.498 "traddr": "10.0.0.2", 00:22:55.498 "trsvcid": "4420" 00:22:55.498 }, 00:22:55.498 "peer_address": { 00:22:55.498 "trtype": "TCP", 00:22:55.498 "adrfam": "IPv4", 00:22:55.498 "traddr": "10.0.0.1", 00:22:55.498 "trsvcid": "55070" 00:22:55.498 }, 00:22:55.498 "auth": { 00:22:55.498 "state": "completed", 00:22:55.498 "digest": "sha384", 00:22:55.498 "dhgroup": "ffdhe8192" 00:22:55.498 } 00:22:55.498 } 00:22:55.498 ]' 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.498 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.758 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.758 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.758 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.758 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:55.758 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.701 14:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.701 14:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:57.272 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.272 
14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.272 { 00:22:57.272 "cntlid": 95, 00:22:57.272 "qid": 0, 00:22:57.272 "state": "enabled", 00:22:57.272 "thread": "nvmf_tgt_poll_group_000", 00:22:57.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:57.272 "listen_address": { 00:22:57.272 "trtype": "TCP", 00:22:57.272 "adrfam": "IPv4", 00:22:57.272 "traddr": "10.0.0.2", 00:22:57.272 "trsvcid": "4420" 00:22:57.272 }, 00:22:57.272 "peer_address": { 00:22:57.272 "trtype": "TCP", 00:22:57.272 "adrfam": "IPv4", 00:22:57.272 "traddr": "10.0.0.1", 00:22:57.272 "trsvcid": "55098" 00:22:57.272 }, 00:22:57.272 "auth": { 00:22:57.272 "state": "completed", 00:22:57.272 "digest": "sha384", 00:22:57.272 "dhgroup": "ffdhe8192" 00:22:57.272 } 00:22:57.272 } 00:22:57.272 ]' 00:22:57.272 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.532 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:57.532 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.532 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:57.532 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.532 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.532 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.532 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.791 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:57.791 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.361 14:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:58.361 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.622 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.884 00:22:58.884 
14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.884 { 00:22:58.884 "cntlid": 97, 00:22:58.884 "qid": 0, 00:22:58.884 "state": "enabled", 00:22:58.884 "thread": "nvmf_tgt_poll_group_000", 00:22:58.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:58.884 "listen_address": { 00:22:58.884 "trtype": "TCP", 00:22:58.884 "adrfam": "IPv4", 00:22:58.884 "traddr": "10.0.0.2", 00:22:58.884 "trsvcid": "4420" 00:22:58.884 }, 00:22:58.884 "peer_address": { 00:22:58.884 "trtype": "TCP", 00:22:58.884 "adrfam": "IPv4", 00:22:58.884 "traddr": "10.0.0.1", 00:22:58.884 "trsvcid": "55130" 00:22:58.884 }, 00:22:58.884 "auth": { 00:22:58.884 "state": "completed", 00:22:58.884 "digest": "sha512", 00:22:58.884 "dhgroup": "null" 00:22:58.884 } 00:22:58.884 } 00:22:58.884 ]' 00:22:58.884 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.145 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.145 14:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.145 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:59.145 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.145 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.145 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.145 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.406 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:59.406 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:22:59.976 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.977 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.977 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.977 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.977 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.977 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.977 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:59.977 14:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.237 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.237 00:23:00.497 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.498 { 00:23:00.498 "cntlid": 99, 00:23:00.498 "qid": 0, 00:23:00.498 "state": "enabled", 00:23:00.498 "thread": "nvmf_tgt_poll_group_000", 00:23:00.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:00.498 "listen_address": { 00:23:00.498 "trtype": "TCP", 00:23:00.498 "adrfam": "IPv4", 00:23:00.498 "traddr": "10.0.0.2", 00:23:00.498 "trsvcid": "4420" 00:23:00.498 }, 00:23:00.498 "peer_address": { 00:23:00.498 "trtype": "TCP", 00:23:00.498 "adrfam": "IPv4", 00:23:00.498 "traddr": "10.0.0.1", 00:23:00.498 "trsvcid": "34882" 00:23:00.498 }, 00:23:00.498 "auth": { 00:23:00.498 "state": "completed", 00:23:00.498 "digest": "sha512", 00:23:00.498 "dhgroup": "null" 00:23:00.498 } 00:23:00.498 } 00:23:00.498 ]' 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.498 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.759 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:00.759 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.759 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.759 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.759 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.020 14:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:01.020 14:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
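The trace above repeats one pattern per digest/dhgroup/key combination (here sha512 with the null dhgroup and key2): restrict the host-side initiator to a single DH-HMAC-CHAP digest and dhgroup, allow the host on the target subsystem with the keys under test, attach a controller with those keys, read back the qpair's negotiated auth parameters, and tear down. A condensed sketch of that sequence, reconstructed only from commands visible in the log; the shell variables are shorthand introduced here, while the socket path, NQNs, and key names are taken verbatim from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Host-side bdev layer (-s /var/tmp/host.sock): accept exactly one digest/dhgroup pair.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

    # Target side (default socket): allow the host with the host and controller keys under test.
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller, authenticating with the same key pair.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Target side: the qpair JSON should report state "completed" plus the expected
    # digest and dhgroup, which the script checks with jq as seen above.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'

    # Tear down before the next combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0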
00:23:01.650 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.962 00:23:01.962 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.962 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.962 14:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.226 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.226 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.226 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.227 { 00:23:02.227 "cntlid": 101, 00:23:02.227 "qid": 0, 00:23:02.227 "state": "enabled", 00:23:02.227 "thread": "nvmf_tgt_poll_group_000", 00:23:02.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:02.227 "listen_address": { 00:23:02.227 "trtype": "TCP", 00:23:02.227 "adrfam": "IPv4", 00:23:02.227 "traddr": "10.0.0.2", 00:23:02.227 "trsvcid": "4420" 00:23:02.227 }, 00:23:02.227 "peer_address": { 00:23:02.227 "trtype": "TCP", 00:23:02.227 "adrfam": "IPv4", 00:23:02.227 "traddr": "10.0.0.1", 00:23:02.227 "trsvcid": "34914" 00:23:02.227 }, 00:23:02.227 "auth": { 00:23:02.227 "state": "completed", 00:23:02.227 "digest": "sha512", 00:23:02.227 "dhgroup": "null" 00:23:02.227 } 00:23:02.227 } 00:23:02.227 ]' 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.227 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.488 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:02.488 14:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:03.060 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:03.320 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:03.320 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.320 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.320 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:03.320 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:03.320 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.321 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:03.321 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.321 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.321 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.321 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:03.321 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.321 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.582 00:23:03.582 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.582 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.582 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.842 { 00:23:03.842 "cntlid": 103, 00:23:03.842 "qid": 0, 00:23:03.842 "state": "enabled", 00:23:03.842 "thread": "nvmf_tgt_poll_group_000", 00:23:03.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:03.842 "listen_address": { 00:23:03.842 "trtype": "TCP", 00:23:03.842 "adrfam": "IPv4", 00:23:03.842 "traddr": "10.0.0.2", 00:23:03.842 "trsvcid": "4420" 00:23:03.842 }, 00:23:03.842 "peer_address": { 00:23:03.842 "trtype": "TCP", 00:23:03.842 "adrfam": "IPv4", 00:23:03.842 "traddr": "10.0.0.1", 00:23:03.842 "trsvcid": "34932" 00:23:03.842 }, 00:23:03.842 "auth": { 00:23:03.842 "state": "completed", 00:23:03.842 "digest": "sha512", 00:23:03.842 "dhgroup": "null" 00:23:03.842 } 00:23:03.842 } 00:23:03.842 ]' 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.842 14:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.102 14:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:04.102 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:04.678 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
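Each iteration then re-authenticates through the kernel initiator: nvme-cli is handed the DHHC-1 secret blob in-band, connects, and the script disconnects and revokes the subsystem host entry afterwards. A sketch of that half of the loop, again using only flags that appear in the trace; $SECRET and $CTRL_SECRET stand in for the DHHC-1:xx:...: blobs printed above, and the controller secret is present only in iterations that also verify a controller key, per the ${ckeys[$3]:+...} expansion echoed in the script source:

    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID

    # Kernel initiator: one I/O queue (-i 1), DH-CHAP host secret, and, when the
    # iteration is bidirectional, the controller secret as well.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q $HOSTNQN --hostid $HOSTID -l 0 \
        --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

    # Drop the connection and revoke the host before the next key/dhgroup pair.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN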
00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.939 14:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.200 00:23:05.200 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.200 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.200 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.200 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.460 { 00:23:05.460 "cntlid": 105, 00:23:05.460 "qid": 0, 00:23:05.460 "state": "enabled", 00:23:05.460 "thread": "nvmf_tgt_poll_group_000", 00:23:05.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:05.460 "listen_address": { 00:23:05.460 "trtype": "TCP", 00:23:05.460 "adrfam": "IPv4", 00:23:05.460 "traddr": "10.0.0.2", 00:23:05.460 "trsvcid": "4420" 00:23:05.460 }, 00:23:05.460 "peer_address": { 00:23:05.460 "trtype": "TCP", 00:23:05.460 "adrfam": "IPv4", 00:23:05.460 "traddr": "10.0.0.1", 00:23:05.460 "trsvcid": "34962" 00:23:05.460 }, 00:23:05.460 "auth": { 00:23:05.460 "state": "completed", 00:23:05.460 "digest": "sha512", 00:23:05.460 "dhgroup": "ffdhe2048" 00:23:05.460 } 00:23:05.460 } 00:23:05.460 ]' 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.460 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.460 14:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.722 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:05.722 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:06.294 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.556 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.816 00:23:06.816 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.816 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.816 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:07.077 { 00:23:07.077 "cntlid": 107, 00:23:07.077 "qid": 0, 00:23:07.077 "state": "enabled", 00:23:07.077 "thread": "nvmf_tgt_poll_group_000", 00:23:07.077 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:07.077 "listen_address": { 00:23:07.077 "trtype": "TCP", 00:23:07.077 "adrfam": "IPv4", 00:23:07.077 "traddr": "10.0.0.2", 00:23:07.077 "trsvcid": "4420" 00:23:07.077 }, 00:23:07.077 "peer_address": { 00:23:07.077 "trtype": "TCP", 00:23:07.077 "adrfam": "IPv4", 00:23:07.077 "traddr": "10.0.0.1", 00:23:07.077 "trsvcid": "34976" 00:23:07.077 }, 00:23:07.077 "auth": { 00:23:07.077 "state": "completed", 00:23:07.077 "digest": "sha512", 00:23:07.077 "dhgroup": "ffdhe2048" 00:23:07.077 } 00:23:07.077 } 00:23:07.077 ]' 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:07.077 14:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:07.077 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:07.077 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:23:07.077 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:07.077 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:07.077 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.337 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:07.337 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:07.908 14:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
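
The jq probes logged just above verify each attach against the target's own view of the connection, and can be reproduced in isolation. A sketch, with rpc.py and the subsystem NQN as above and the expected values taken from the ffdhe2048 passes in this log:

    # confirm the qpair negotiated the expected digest, dhgroup, and auth state
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
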
00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.169 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.430 00:23:08.430 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.430 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.430 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.690 { 00:23:08.690 "cntlid": 109, 00:23:08.690 "qid": 0, 00:23:08.690 "state": "enabled", 00:23:08.690 "thread": "nvmf_tgt_poll_group_000", 00:23:08.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:08.690 "listen_address": { 00:23:08.690 "trtype": "TCP", 00:23:08.690 "adrfam": "IPv4", 00:23:08.690 "traddr": "10.0.0.2", 00:23:08.690 "trsvcid": "4420" 00:23:08.690 }, 00:23:08.690 "peer_address": { 00:23:08.690 "trtype": "TCP", 00:23:08.690 "adrfam": "IPv4", 00:23:08.690 "traddr": "10.0.0.1", 00:23:08.690 "trsvcid": "35010" 00:23:08.690 }, 00:23:08.690 "auth": { 00:23:08.690 "state": "completed", 00:23:08.690 "digest": "sha512", 00:23:08.690 "dhgroup": "ffdhe2048" 00:23:08.690 } 00:23:08.690 } 00:23:08.690 ]' 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.690 14:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.690 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.949 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:08.949 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.521 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.807 14:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.807 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:10.068 00:23:10.068 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.068 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.068 14:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.068 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.068 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.068 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.068 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.068 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.068 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:10.068 { 00:23:10.068 "cntlid": 111, 00:23:10.068 "qid": 0, 00:23:10.068 "state": "enabled", 00:23:10.068 "thread": "nvmf_tgt_poll_group_000", 00:23:10.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:10.068 "listen_address": { 00:23:10.068 "trtype": "TCP", 00:23:10.068 "adrfam": "IPv4", 00:23:10.068 "traddr": "10.0.0.2", 00:23:10.068 "trsvcid": "4420" 00:23:10.068 }, 00:23:10.068 "peer_address": { 00:23:10.068 "trtype": "TCP", 00:23:10.068 "adrfam": "IPv4", 00:23:10.068 "traddr": "10.0.0.1", 00:23:10.068 "trsvcid": "60022" 00:23:10.068 }, 00:23:10.068 "auth": { 00:23:10.068 "state": "completed", 00:23:10.068 "digest": "sha512", 00:23:10.068 "dhgroup": "ffdhe2048" 00:23:10.068 } 00:23:10.068 } 00:23:10.068 ]' 00:23:10.068 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:10.329 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:10.329 
14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:10.329 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:10.329 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:10.329 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.329 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.329 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.589 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:10.589 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:11.159 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.419 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.419 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.680 { 00:23:11.680 "cntlid": 113, 00:23:11.680 "qid": 0, 00:23:11.680 "state": "enabled", 00:23:11.680 "thread": "nvmf_tgt_poll_group_000", 00:23:11.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:11.680 "listen_address": { 00:23:11.680 "trtype": "TCP", 00:23:11.680 "adrfam": "IPv4", 00:23:11.680 "traddr": "10.0.0.2", 00:23:11.680 "trsvcid": "4420" 00:23:11.680 }, 00:23:11.680 "peer_address": { 00:23:11.680 "trtype": "TCP", 00:23:11.680 "adrfam": "IPv4", 00:23:11.680 "traddr": "10.0.0.1", 00:23:11.680 "trsvcid": "60046" 00:23:11.680 }, 00:23:11.680 "auth": { 00:23:11.680 "state": "completed", 00:23:11.680 "digest": "sha512", 00:23:11.680 "dhgroup": "ffdhe3072" 00:23:11.680 } 00:23:11.680 } 00:23:11.680 ]' 00:23:11.680 14:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.680 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.942 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:11.942 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.942 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.942 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.942 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.942 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:11.942 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:12.884 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.885 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.885 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.885 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.885 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.885 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.885 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.885 14:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.145 00:23:13.145 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:13.145 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:13.145 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.407 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.407 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:13.407 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.407 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.407 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:13.408 { 00:23:13.408 "cntlid": 115, 00:23:13.408 "qid": 0, 00:23:13.408 "state": "enabled", 00:23:13.408 "thread": "nvmf_tgt_poll_group_000", 00:23:13.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:13.408 "listen_address": { 00:23:13.408 "trtype": "TCP", 00:23:13.408 "adrfam": "IPv4", 00:23:13.408 "traddr": "10.0.0.2", 00:23:13.408 "trsvcid": "4420" 00:23:13.408 }, 00:23:13.408 "peer_address": { 00:23:13.408 "trtype": "TCP", 00:23:13.408 "adrfam": "IPv4", 
00:23:13.408 "traddr": "10.0.0.1", 00:23:13.408 "trsvcid": "60066" 00:23:13.408 }, 00:23:13.408 "auth": { 00:23:13.408 "state": "completed", 00:23:13.408 "digest": "sha512", 00:23:13.408 "dhgroup": "ffdhe3072" 00:23:13.408 } 00:23:13.408 } 00:23:13.408 ]' 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.408 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.668 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:13.668 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:14.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.251 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.519 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.780 00:23:14.780 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.780 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.780 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.041 { 00:23:15.041 "cntlid": 117, 00:23:15.041 "qid": 0, 00:23:15.041 "state": "enabled", 00:23:15.041 "thread": "nvmf_tgt_poll_group_000", 00:23:15.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:15.041 "listen_address": { 00:23:15.041 "trtype": "TCP", 
00:23:15.041 "adrfam": "IPv4", 00:23:15.041 "traddr": "10.0.0.2", 00:23:15.041 "trsvcid": "4420" 00:23:15.041 }, 00:23:15.041 "peer_address": { 00:23:15.041 "trtype": "TCP", 00:23:15.041 "adrfam": "IPv4", 00:23:15.041 "traddr": "10.0.0.1", 00:23:15.041 "trsvcid": "60098" 00:23:15.041 }, 00:23:15.041 "auth": { 00:23:15.041 "state": "completed", 00:23:15.041 "digest": "sha512", 00:23:15.041 "dhgroup": "ffdhe3072" 00:23:15.041 } 00:23:15.041 } 00:23:15.041 ]' 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:15.041 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:15.041 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:15.042 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:15.042 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.042 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.042 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.301 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:15.301 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:15.873 14:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:16.134 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:16.396 00:23:16.396 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:16.396 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.396 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.657 { 00:23:16.657 "cntlid": 119, 00:23:16.657 "qid": 0, 00:23:16.657 "state": "enabled", 00:23:16.657 "thread": "nvmf_tgt_poll_group_000", 00:23:16.657 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:16.657 "listen_address": { 00:23:16.657 "trtype": "TCP", 00:23:16.657 "adrfam": "IPv4", 00:23:16.657 "traddr": "10.0.0.2", 00:23:16.657 "trsvcid": "4420" 00:23:16.657 }, 00:23:16.657 "peer_address": { 00:23:16.657 "trtype": "TCP", 00:23:16.657 "adrfam": "IPv4", 00:23:16.657 "traddr": "10.0.0.1", 00:23:16.657 "trsvcid": "60114" 00:23:16.657 }, 00:23:16.657 "auth": { 00:23:16.657 "state": "completed", 00:23:16.657 "digest": "sha512", 00:23:16.657 "dhgroup": "ffdhe3072" 00:23:16.657 } 00:23:16.657 } 00:23:16.657 ]' 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.657 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.918 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:16.918 14:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:17.490 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:17.490 14:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.751 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.011 00:23:18.011 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:18.011 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:18.011 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.272 14:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:18.272 { 00:23:18.272 "cntlid": 121, 00:23:18.272 "qid": 0, 00:23:18.272 "state": "enabled", 00:23:18.272 "thread": "nvmf_tgt_poll_group_000", 00:23:18.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:18.272 "listen_address": { 00:23:18.272 "trtype": "TCP", 00:23:18.272 "adrfam": "IPv4", 00:23:18.272 "traddr": "10.0.0.2", 00:23:18.272 "trsvcid": "4420" 00:23:18.272 }, 00:23:18.272 "peer_address": { 00:23:18.272 "trtype": "TCP", 00:23:18.272 "adrfam": "IPv4", 00:23:18.272 "traddr": "10.0.0.1", 00:23:18.272 "trsvcid": "60152" 00:23:18.272 }, 00:23:18.272 "auth": { 00:23:18.272 "state": "completed", 00:23:18.272 "digest": "sha512", 00:23:18.272 "dhgroup": "ffdhe4096" 00:23:18.272 } 00:23:18.272 } 00:23:18.272 ]' 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:18.272 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:18.273 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:18.273 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:18.273 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:18.273 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.533 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:18.533 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
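[Editor's note: the trace above completes one full pass of the connect_authenticate loop for sha512/ffdhe4096 with key0; the passes that follow are identical except for the key index and, later, the DH group. Below is a minimal sketch of that loop body reconstructed from the xtrace output — the rpc.py path, socket, NQNs, addresses, and flags are copied from the log, while the variable names and the assumption that target-side calls use rpc.py's default socket (as rpc_cmd appears to) are mine, not the original target/auth.sh. The DHHC-1:<n>: prefix on each secret is the standard DH-HMAC-CHAP key representation, where, as I understand the format, <n> names the hash the secret was generated with (0 = unhashed, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512).

    #!/usr/bin/env bash
    # Hedged reconstruction of one connect_authenticate pass, not the original script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0
    digest=sha512 dhgroup=ffdhe4096 keyid=0

    # Pin the host to a single digest/DH-group combination for this pass.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Register the host on the subsystem with key N. key3 has no controller key
    # in this run, which is why the ckey arguments are conditional in the trace.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach from the host app; DH-HMAC-CHAP runs as part of the fabric CONNECT.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # The target's qpair view carries the negotiated auth parameters,
    # checked later via .[0].auth.{digest,dhgroup,state}.
    "$rpc" nvmf_subsystem_get_qpairs "$subnqn"

    # Tear down, then repeat the connection through the kernel path with
    # nvme-cli using the same secrets ($key0/$ckey0 stand for the DHHC-1
    # strings visible in the log, not variables from it).
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret "$key0" --dhchap-ctrl-secret "$ckey0"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

End of note; the trace continues with the key1 pass.]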
00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:19.107 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.368 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.628 00:23:19.628 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.628 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.628 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.888 { 00:23:19.888 "cntlid": 123, 00:23:19.888 "qid": 0, 00:23:19.888 "state": "enabled", 00:23:19.888 "thread": "nvmf_tgt_poll_group_000", 00:23:19.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:19.888 "listen_address": { 00:23:19.888 "trtype": "TCP", 00:23:19.888 "adrfam": "IPv4", 00:23:19.888 "traddr": "10.0.0.2", 00:23:19.888 "trsvcid": "4420" 00:23:19.888 }, 00:23:19.888 "peer_address": { 00:23:19.888 "trtype": "TCP", 00:23:19.888 "adrfam": "IPv4", 00:23:19.888 "traddr": "10.0.0.1", 00:23:19.888 "trsvcid": "60184" 00:23:19.888 }, 00:23:19.888 "auth": { 00:23:19.888 "state": "completed", 00:23:19.888 "digest": "sha512", 00:23:19.888 "dhgroup": "ffdhe4096" 00:23:19.888 } 00:23:19.888 } 00:23:19.888 ]' 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.888 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.889 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.889 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:19.889 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.889 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.889 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.889 14:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:20.149 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:20.149 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:20.721 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.721 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:20.721 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.721 14:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.721 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.721 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.721 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:20.721 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.981 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.982 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.242 00:23:21.242 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:21.242 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:21.242 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.502 14:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:21.502 { 00:23:21.502 "cntlid": 125, 00:23:21.502 "qid": 0, 00:23:21.502 "state": "enabled", 00:23:21.502 "thread": "nvmf_tgt_poll_group_000", 00:23:21.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:21.502 "listen_address": { 00:23:21.502 "trtype": "TCP", 00:23:21.502 "adrfam": "IPv4", 00:23:21.502 "traddr": "10.0.0.2", 00:23:21.502 "trsvcid": "4420" 00:23:21.502 }, 00:23:21.502 "peer_address": { 00:23:21.502 "trtype": "TCP", 00:23:21.502 "adrfam": "IPv4", 00:23:21.502 "traddr": "10.0.0.1", 00:23:21.502 "trsvcid": "52396" 00:23:21.502 }, 00:23:21.502 "auth": { 00:23:21.502 "state": "completed", 00:23:21.502 "digest": "sha512", 00:23:21.502 "dhgroup": "ffdhe4096" 00:23:21.502 } 00:23:21.502 } 00:23:21.502 ]' 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.502 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:21.503 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.503 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.503 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.503 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.763 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:21.763 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:22.334 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:22.594 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.595 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.857 00:23:22.857 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.857 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.857 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.117 14:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.117 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.117 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.117 14:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:23.117 { 00:23:23.117 "cntlid": 127, 00:23:23.117 "qid": 0, 00:23:23.117 "state": "enabled", 00:23:23.117 "thread": "nvmf_tgt_poll_group_000", 00:23:23.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:23.117 "listen_address": { 00:23:23.117 "trtype": "TCP", 00:23:23.117 "adrfam": "IPv4", 00:23:23.117 "traddr": "10.0.0.2", 00:23:23.117 "trsvcid": "4420" 00:23:23.117 }, 00:23:23.117 "peer_address": { 00:23:23.117 "trtype": "TCP", 00:23:23.117 "adrfam": "IPv4", 00:23:23.117 "traddr": "10.0.0.1", 00:23:23.117 "trsvcid": "52420" 00:23:23.117 }, 00:23:23.117 "auth": { 00:23:23.117 "state": "completed", 00:23:23.117 "digest": "sha512", 00:23:23.117 "dhgroup": "ffdhe4096" 00:23:23.117 } 00:23:23.117 } 00:23:23.117 ]' 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.117 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.378 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:23.378 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:23.947 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.207 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.467 00:23:24.467 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.467 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.467 
14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.727 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.727 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.727 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.728 { 00:23:24.728 "cntlid": 129, 00:23:24.728 "qid": 0, 00:23:24.728 "state": "enabled", 00:23:24.728 "thread": "nvmf_tgt_poll_group_000", 00:23:24.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:24.728 "listen_address": { 00:23:24.728 "trtype": "TCP", 00:23:24.728 "adrfam": "IPv4", 00:23:24.728 "traddr": "10.0.0.2", 00:23:24.728 "trsvcid": "4420" 00:23:24.728 }, 00:23:24.728 "peer_address": { 00:23:24.728 "trtype": "TCP", 00:23:24.728 "adrfam": "IPv4", 00:23:24.728 "traddr": "10.0.0.1", 00:23:24.728 "trsvcid": "52448" 00:23:24.728 }, 00:23:24.728 "auth": { 00:23:24.728 "state": "completed", 00:23:24.728 "digest": "sha512", 00:23:24.728 "dhgroup": "ffdhe6144" 00:23:24.728 } 00:23:24.728 } 00:23:24.728 ]' 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:24.728 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.989 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.989 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.989 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.989 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:24.990 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret 
DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.932 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.193 00:23:26.193 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:26.193 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:26.193 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.454 { 00:23:26.454 "cntlid": 131, 00:23:26.454 "qid": 0, 00:23:26.454 "state": "enabled", 00:23:26.454 "thread": "nvmf_tgt_poll_group_000", 00:23:26.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:26.454 "listen_address": { 00:23:26.454 "trtype": "TCP", 00:23:26.454 "adrfam": "IPv4", 00:23:26.454 "traddr": "10.0.0.2", 00:23:26.454 "trsvcid": "4420" 00:23:26.454 }, 00:23:26.454 "peer_address": { 00:23:26.454 "trtype": "TCP", 00:23:26.454 "adrfam": "IPv4", 00:23:26.454 "traddr": "10.0.0.1", 00:23:26.454 "trsvcid": "52480" 00:23:26.454 }, 00:23:26.454 "auth": { 00:23:26.454 "state": "completed", 00:23:26.454 "digest": "sha512", 00:23:26.454 "dhgroup": "ffdhe6144" 00:23:26.454 } 00:23:26.454 } 00:23:26.454 ]' 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:26.454 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.714 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.714 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.714 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.714 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:26.714 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:27.284 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.545 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.116 00:23:28.116 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:28.116 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:28.116 14:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:28.116 { 00:23:28.116 "cntlid": 133, 00:23:28.116 "qid": 0, 00:23:28.116 "state": "enabled", 00:23:28.116 "thread": "nvmf_tgt_poll_group_000", 00:23:28.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:28.116 "listen_address": { 00:23:28.116 "trtype": "TCP", 00:23:28.116 "adrfam": "IPv4", 00:23:28.116 "traddr": "10.0.0.2", 00:23:28.116 "trsvcid": "4420" 00:23:28.116 }, 00:23:28.116 "peer_address": { 00:23:28.116 "trtype": "TCP", 00:23:28.116 "adrfam": "IPv4", 00:23:28.116 "traddr": "10.0.0.1", 00:23:28.116 "trsvcid": "52506" 00:23:28.116 }, 00:23:28.116 "auth": { 00:23:28.116 "state": "completed", 00:23:28.116 "digest": "sha512", 00:23:28.116 "dhgroup": "ffdhe6144" 00:23:28.116 } 00:23:28.116 } 00:23:28.116 ]' 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.116 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.377 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:28.377 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.377 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.377 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.377 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.638 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret 
DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:28.638 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:29.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:29.210 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:23:29.471 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.731 00:23:29.731 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.731 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.731 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.992 { 00:23:29.992 "cntlid": 135, 00:23:29.992 "qid": 0, 00:23:29.992 "state": "enabled", 00:23:29.992 "thread": "nvmf_tgt_poll_group_000", 00:23:29.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:29.992 "listen_address": { 00:23:29.992 "trtype": "TCP", 00:23:29.992 "adrfam": "IPv4", 00:23:29.992 "traddr": "10.0.0.2", 00:23:29.992 "trsvcid": "4420" 00:23:29.992 }, 00:23:29.992 "peer_address": { 00:23:29.992 "trtype": "TCP", 00:23:29.992 "adrfam": "IPv4", 00:23:29.992 "traddr": "10.0.0.1", 00:23:29.992 "trsvcid": "52518" 00:23:29.992 }, 00:23:29.992 "auth": { 00:23:29.992 "state": "completed", 00:23:29.992 "digest": "sha512", 00:23:29.992 "dhgroup": "ffdhe6144" 00:23:29.992 } 00:23:29.992 } 00:23:29.992 ]' 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:29.992 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.993 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.993 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.993 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.253 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:30.253 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:30.824 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.085 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:31.085 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:31.085 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:31.085 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:31.085 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:31.085 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:31.085 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.085 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.085 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.085 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.085 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.085 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.085 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.656 00:23:31.656 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.657 { 00:23:31.657 "cntlid": 137, 00:23:31.657 "qid": 0, 00:23:31.657 "state": "enabled", 00:23:31.657 "thread": "nvmf_tgt_poll_group_000", 00:23:31.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:31.657 "listen_address": { 00:23:31.657 "trtype": "TCP", 00:23:31.657 "adrfam": "IPv4", 00:23:31.657 "traddr": "10.0.0.2", 00:23:31.657 "trsvcid": "4420" 00:23:31.657 }, 00:23:31.657 "peer_address": { 00:23:31.657 "trtype": "TCP", 00:23:31.657 "adrfam": "IPv4", 00:23:31.657 "traddr": "10.0.0.1", 00:23:31.657 "trsvcid": "32922" 00:23:31.657 }, 00:23:31.657 "auth": { 00:23:31.657 "state": "completed", 00:23:31.657 "digest": "sha512", 00:23:31.657 "dhgroup": "ffdhe8192" 00:23:31.657 } 00:23:31.657 } 00:23:31.657 ]' 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.657 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.917 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.917 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:31.917 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.917 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.917 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.917 14:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.179 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:32.179 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:32.753 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.014 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:33.014 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:33.014 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:33.014 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:33.014 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:33.014 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.015 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.015 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.015 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.015 14:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.015 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.015 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.015 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.275 00:23:33.275 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:33.275 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:33.275 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:33.536 { 00:23:33.536 "cntlid": 139, 00:23:33.536 "qid": 0, 00:23:33.536 "state": "enabled", 00:23:33.536 "thread": "nvmf_tgt_poll_group_000", 00:23:33.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:33.536 "listen_address": { 00:23:33.536 "trtype": "TCP", 00:23:33.536 "adrfam": "IPv4", 00:23:33.536 "traddr": "10.0.0.2", 00:23:33.536 "trsvcid": "4420" 00:23:33.536 }, 00:23:33.536 "peer_address": { 00:23:33.536 "trtype": "TCP", 00:23:33.536 "adrfam": "IPv4", 00:23:33.536 "traddr": "10.0.0.1", 00:23:33.536 "trsvcid": "32938" 00:23:33.536 }, 00:23:33.536 "auth": { 00:23:33.536 "state": "completed", 00:23:33.536 "digest": "sha512", 00:23:33.536 "dhgroup": "ffdhe8192" 00:23:33.536 } 00:23:33.536 } 00:23:33.536 ]' 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:33.536 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:33.798 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:33.798 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:33.798 14:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:33.798 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:33.798 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.060 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:34.060 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: --dhchap-ctrl-secret DHHC-1:02:OTNlNDQwZWQwNTNhODFmZjM2ZGQ2ZmZhNzQyZGU2ZDg0MGZhOWI1ZjA3ZWY3MmM4V44eRg==: 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:34.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.630 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.890 14:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.890 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.151 00:23:35.151 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:35.151 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:35.151 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.411 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.411 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:35.411 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.411 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.411 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.411 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:35.411 { 00:23:35.411 "cntlid": 141, 00:23:35.411 "qid": 0, 00:23:35.411 "state": "enabled", 00:23:35.411 "thread": "nvmf_tgt_poll_group_000", 00:23:35.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:35.411 "listen_address": { 00:23:35.411 "trtype": "TCP", 00:23:35.411 "adrfam": "IPv4", 00:23:35.411 "traddr": "10.0.0.2", 00:23:35.412 "trsvcid": "4420" 00:23:35.412 }, 00:23:35.412 "peer_address": { 00:23:35.412 "trtype": "TCP", 00:23:35.412 "adrfam": "IPv4", 00:23:35.412 "traddr": "10.0.0.1", 00:23:35.412 "trsvcid": "32962" 00:23:35.412 }, 00:23:35.412 "auth": { 00:23:35.412 "state": "completed", 00:23:35.412 "digest": "sha512", 00:23:35.412 "dhgroup": "ffdhe8192" 00:23:35.412 } 00:23:35.412 } 00:23:35.412 ]' 00:23:35.412 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:35.412 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:35.412 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:35.730 14:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:35.730 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:35.730 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:35.730 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:35.730 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:35.730 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:35.730 14:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:01:NDUwMDFjZmM1NzhiOTY2ZTZiMmVjY2UyYmEwNzA3OWHThZ03: 00:23:36.300 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:36.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:36.560 14:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.560 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.561 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.561 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:36.561 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:36.561 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.129 00:23:37.129 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:37.129 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:37.129 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:37.390 { 00:23:37.390 "cntlid": 143, 00:23:37.390 "qid": 0, 00:23:37.390 "state": "enabled", 00:23:37.390 "thread": "nvmf_tgt_poll_group_000", 00:23:37.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:37.390 "listen_address": { 00:23:37.390 "trtype": "TCP", 00:23:37.390 "adrfam": "IPv4", 00:23:37.390 "traddr": "10.0.0.2", 00:23:37.390 "trsvcid": "4420" 00:23:37.390 }, 00:23:37.390 "peer_address": { 00:23:37.390 "trtype": "TCP", 00:23:37.390 "adrfam": "IPv4", 00:23:37.390 "traddr": "10.0.0.1", 00:23:37.390 "trsvcid": "32980" 00:23:37.390 }, 00:23:37.390 "auth": { 00:23:37.390 "state": "completed", 00:23:37.390 "digest": "sha512", 00:23:37.390 "dhgroup": "ffdhe8192" 00:23:37.390 } 00:23:37.390 } 00:23:37.390 ]' 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:37.390 
14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:37.390 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.650 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:37.650 14:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.221 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:38.483 14:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.483 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.053 00:23:39.053 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:39.053 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:39.053 14:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.053 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.053 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.053 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.053 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.053 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.053 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:39.053 { 00:23:39.053 "cntlid": 145, 00:23:39.053 "qid": 0, 00:23:39.053 "state": "enabled", 00:23:39.053 "thread": "nvmf_tgt_poll_group_000", 00:23:39.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:39.053 "listen_address": { 00:23:39.053 "trtype": "TCP", 00:23:39.054 "adrfam": "IPv4", 00:23:39.054 "traddr": "10.0.0.2", 00:23:39.054 "trsvcid": "4420" 00:23:39.054 }, 00:23:39.054 "peer_address": { 00:23:39.054 
"trtype": "TCP", 00:23:39.054 "adrfam": "IPv4", 00:23:39.054 "traddr": "10.0.0.1", 00:23:39.054 "trsvcid": "32998" 00:23:39.054 }, 00:23:39.054 "auth": { 00:23:39.054 "state": "completed", 00:23:39.054 "digest": "sha512", 00:23:39.054 "dhgroup": "ffdhe8192" 00:23:39.054 } 00:23:39.054 } 00:23:39.054 ]' 00:23:39.054 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:39.314 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:39.314 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:39.314 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:39.314 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:39.314 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.314 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.314 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.669 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:39.669 14:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:ZTJmYzM5OTM4NzhiN2Y4NjhkNWI2MGRjNmFlYWE5YWM4MWM0YWI4YmMxOWE0NDVkK/4n9A==: --dhchap-ctrl-secret DHHC-1:03:MTE4NGY2MmQwODdiZTJiOGUwYTJmMzRmNDE4MGVlYmZjM2JhYzA5NThhZjVkMTY3YzQyZDIwMzYwYjUyNmQxYRyTJV4=: 00:23:39.981 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:40.247 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:40.508 request: 00:23:40.508 { 00:23:40.508 "name": "nvme0", 00:23:40.508 "trtype": "tcp", 00:23:40.508 "traddr": "10.0.0.2", 00:23:40.508 "adrfam": "ipv4", 00:23:40.508 "trsvcid": "4420", 00:23:40.508 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:40.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:40.508 "prchk_reftag": false, 00:23:40.508 "prchk_guard": false, 00:23:40.508 "hdgst": false, 00:23:40.508 "ddgst": false, 00:23:40.508 "dhchap_key": "key2", 00:23:40.508 "allow_unrecognized_csi": false, 00:23:40.508 "method": "bdev_nvme_attach_controller", 00:23:40.508 "req_id": 1 00:23:40.508 } 00:23:40.508 Got JSON-RPC error response 00:23:40.508 response: 00:23:40.508 { 00:23:40.508 "code": -5, 00:23:40.508 "message": "Input/output error" 00:23:40.508 } 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.508 14:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:40.508 14:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:41.079 request: 00:23:41.079 { 00:23:41.079 "name": "nvme0", 00:23:41.079 "trtype": "tcp", 00:23:41.079 "traddr": "10.0.0.2", 00:23:41.079 "adrfam": "ipv4", 00:23:41.079 "trsvcid": "4420", 00:23:41.079 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:41.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:41.079 "prchk_reftag": false, 00:23:41.079 "prchk_guard": false, 00:23:41.079 "hdgst": false, 00:23:41.079 "ddgst": false, 00:23:41.079 "dhchap_key": "key1", 00:23:41.079 "dhchap_ctrlr_key": "ckey2", 00:23:41.079 "allow_unrecognized_csi": false, 00:23:41.079 "method": "bdev_nvme_attach_controller", 00:23:41.079 "req_id": 1 00:23:41.079 } 00:23:41.079 Got JSON-RPC error response 00:23:41.079 response: 00:23:41.079 { 00:23:41.079 "code": -5, 00:23:41.079 "message": "Input/output error" 00:23:41.079 } 00:23:41.079 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:41.079 14:21:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.079 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.079 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.079 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:41.079 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.079 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.080 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.652 request: 00:23:41.652 { 00:23:41.652 "name": "nvme0", 00:23:41.652 "trtype": "tcp", 00:23:41.652 "traddr": "10.0.0.2", 00:23:41.652 "adrfam": "ipv4", 00:23:41.652 "trsvcid": "4420", 00:23:41.652 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:41.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:41.652 "prchk_reftag": false, 00:23:41.652 "prchk_guard": false, 00:23:41.652 "hdgst": false, 00:23:41.652 "ddgst": false, 00:23:41.652 "dhchap_key": "key1", 00:23:41.652 "dhchap_ctrlr_key": "ckey1", 00:23:41.652 "allow_unrecognized_csi": false, 00:23:41.652 "method": "bdev_nvme_attach_controller", 00:23:41.652 "req_id": 1 00:23:41.652 } 00:23:41.652 Got JSON-RPC error response 00:23:41.652 response: 00:23:41.652 { 00:23:41.652 "code": -5, 00:23:41.652 "message": "Input/output error" 00:23:41.652 } 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3406124 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3406124 ']' 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3406124 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3406124 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3406124' 00:23:41.652 killing process with pid 3406124 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3406124 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3406124 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3432418 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3432418 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3432418 ']' 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.652 14:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3432418 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3432418 ']' 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
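Condensed, the restart the trace above is working through amounts to the sketch below. This is a reconstruction from the logged commands, not the verbatim test script: the binary path is shortened relative to the workspace, and the framework_start_init step is an assumption about what the rpc_cmd batch in the next records does to leave --wait-for-rpc mode (its "null0" output suggests a null bdev is also created there).

# relaunch the target paused (--wait-for-rpc), with DH-HMAC-CHAP tracing (-L nvmf_auth), inside the test netns
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
# once /var/tmp/spdk.sock answers: finish startup, then register the key files the following records load
scripts/rpc.py framework_start_init                                  # assumed: resumes the paused app
scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.aGc
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NAN   # ...and likewise for key1..key3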
00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.593 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.854 null0 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aGc 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.NAN ]] 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NAN 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.854 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.xVP 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.tg4 ]] 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tg4 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:42.855 14:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CEy 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.otD ]] 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.otD 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Ci8 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.855 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
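With the key files registered, each round of this pass reduces to the same two host-side calls, executed in the next records. A sketch with the long workspace path dropped and the host NQN abbreviated to $HOSTNQN (both abbreviations are editorial, not from the script):

# attach using key3 only; ckeys[3] is empty, so no --dhchap-ctrlr-key is passed
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
# then verify what the qpair negotiated, target-side (rpc_cmd uses the target's default socket)
rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect sha512, ffdhe8192, completed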
00:23:43.116 14:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:43.687 nvme0n1 00:23:43.687 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:43.687 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.687 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.948 { 00:23:43.948 "cntlid": 1, 00:23:43.948 "qid": 0, 00:23:43.948 "state": "enabled", 00:23:43.948 "thread": "nvmf_tgt_poll_group_000", 00:23:43.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:43.948 "listen_address": { 00:23:43.948 "trtype": "TCP", 00:23:43.948 "adrfam": "IPv4", 00:23:43.948 "traddr": "10.0.0.2", 00:23:43.948 "trsvcid": "4420" 00:23:43.948 }, 00:23:43.948 "peer_address": { 00:23:43.948 "trtype": "TCP", 00:23:43.948 "adrfam": "IPv4", 00:23:43.948 "traddr": "10.0.0.1", 00:23:43.948 "trsvcid": "57256" 00:23:43.948 }, 00:23:43.948 "auth": { 00:23:43.948 "state": "completed", 00:23:43.948 "digest": "sha512", 00:23:43.948 "dhgroup": "ffdhe8192" 00:23:43.948 } 00:23:43.948 } 00:23:43.948 ]' 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:43.948 14:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.948 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:43.948 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:44.209 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.209 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.209 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.209 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:44.209 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:45.152 14:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:45.152 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:45.413 request: 00:23:45.413 { 00:23:45.413 "name": "nvme0", 00:23:45.413 "trtype": "tcp", 00:23:45.413 "traddr": "10.0.0.2", 00:23:45.413 "adrfam": "ipv4", 00:23:45.413 "trsvcid": "4420", 00:23:45.413 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:45.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:45.413 "prchk_reftag": false, 00:23:45.413 "prchk_guard": false, 00:23:45.413 "hdgst": false, 00:23:45.413 "ddgst": false, 00:23:45.413 "dhchap_key": "key3", 00:23:45.413 "allow_unrecognized_csi": false, 00:23:45.413 "method": "bdev_nvme_attach_controller", 00:23:45.413 "req_id": 1 00:23:45.413 } 00:23:45.413 Got JSON-RPC error response 00:23:45.413 response: 00:23:45.413 { 00:23:45.413 "code": -5, 00:23:45.413 "message": "Input/output error" 00:23:45.413 } 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:45.413 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:45.674 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:45.674 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:45.674 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:45.674 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:45.674 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:45.675 request: 00:23:45.675 { 00:23:45.675 "name": "nvme0", 00:23:45.675 "trtype": "tcp", 00:23:45.675 "traddr": "10.0.0.2", 00:23:45.675 "adrfam": "ipv4", 00:23:45.675 "trsvcid": "4420", 00:23:45.675 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:45.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:45.675 "prchk_reftag": false, 00:23:45.675 "prchk_guard": false, 00:23:45.675 "hdgst": false, 00:23:45.675 "ddgst": false, 00:23:45.675 "dhchap_key": "key3", 00:23:45.675 "allow_unrecognized_csi": false, 00:23:45.675 "method": "bdev_nvme_attach_controller", 00:23:45.675 "req_id": 1 00:23:45.675 } 00:23:45.675 Got JSON-RPC error response 00:23:45.675 response: 00:23:45.675 { 00:23:45.675 "code": -5, 00:23:45.675 "message": "Input/output error" 00:23:45.675 } 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:45.675 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:45.937 14:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:46.198 request: 00:23:46.198 { 00:23:46.198 "name": "nvme0", 00:23:46.198 "trtype": "tcp", 00:23:46.198 "traddr": "10.0.0.2", 00:23:46.198 "adrfam": "ipv4", 00:23:46.198 "trsvcid": "4420", 00:23:46.198 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:46.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:46.198 "prchk_reftag": false, 00:23:46.198 "prchk_guard": false, 00:23:46.198 "hdgst": false, 00:23:46.198 "ddgst": false, 00:23:46.198 "dhchap_key": "key0", 00:23:46.198 "dhchap_ctrlr_key": "key1", 00:23:46.198 "allow_unrecognized_csi": false, 00:23:46.198 "method": "bdev_nvme_attach_controller", 00:23:46.198 "req_id": 1 00:23:46.198 } 00:23:46.198 Got JSON-RPC error response 00:23:46.198 response: 00:23:46.198 { 00:23:46.198 "code": -5, 00:23:46.198 "message": "Input/output error" 00:23:46.198 } 00:23:46.198 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:46.198 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.198 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.198 14:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.198 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:46.198 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:46.198 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:46.459 nvme0n1 00:23:46.459 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:46.459 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:46.459 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.720 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.720 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.720 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.981 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:46.981 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.981 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.981 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.981 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:46.981 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:46.981 14:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:47.560 nvme0n1 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:47.821 14:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.082 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.082 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:48.082 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: --dhchap-ctrl-secret DHHC-1:03:NDQyMjliY2RkZWJkMmZhYWYyOGJhMGM3ZTYyYTZlYjI1NzY2YWMzYzFiODJjZDE2NjExNWNkYjFhMGYwMGUyYZDv+MI=: 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.654 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:48.915 14:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:49.487 request: 00:23:49.487 { 00:23:49.487 "name": "nvme0", 00:23:49.487 "trtype": "tcp", 00:23:49.487 "traddr": "10.0.0.2", 00:23:49.487 "adrfam": "ipv4", 00:23:49.487 "trsvcid": "4420", 00:23:49.487 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:49.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:49.487 "prchk_reftag": false, 00:23:49.487 "prchk_guard": false, 00:23:49.487 "hdgst": false, 00:23:49.487 "ddgst": false, 00:23:49.487 "dhchap_key": "key1", 00:23:49.487 "allow_unrecognized_csi": false, 00:23:49.487 "method": "bdev_nvme_attach_controller", 00:23:49.487 "req_id": 1 00:23:49.487 } 00:23:49.487 Got JSON-RPC error response 00:23:49.487 response: 00:23:49.487 { 00:23:49.487 "code": -5, 00:23:49.487 "message": "Input/output error" 00:23:49.487 } 00:23:49.487 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:49.487 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.487 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.487 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.487 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:49.487 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:49.487 14:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:50.057 nvme0n1 00:23:50.057 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:50.057 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.057 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:50.317 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.317 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.317 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:50.578 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:50.578 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.578 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.578 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:50.578 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:50.578 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:50.578 nvme0n1 00:23:50.838 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:50.838 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:50.838 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.838 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.838 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.838 14:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: '' 2s 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: ]] 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YjQwOTZjMWMwYTg3NTNkZmZhYzQzZjM3ZTk0MTIwZGZ3N69P: 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:51.098 14:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:53.009 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:53.009 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:53.269 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:53.269 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:53.269 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: 2s 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: ]] 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDZkMzMyNzNmYWJkYzAzNDQ4YmMzNGYzMGFmNWIzYmM0OTU3NDU4MTRhZGU1YTVjeEvLPw==: 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:53.270 14:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:55.192 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.193 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:55.193 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.193 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.193 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.193 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:55.193 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:55.193 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:56.136 nvme0n1 00:23:56.136 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:56.136 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.136 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.136 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.136 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:56.136 14:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:56.397 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:56.397 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:56.397 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.658 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.658 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.658 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.658 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.658 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.658 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:56.658 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:56.921 14:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:57.493 request: 00:23:57.493 { 00:23:57.493 "name": "nvme0", 00:23:57.493 "dhchap_key": "key1", 00:23:57.493 "dhchap_ctrlr_key": "key3", 00:23:57.493 "method": "bdev_nvme_set_keys", 00:23:57.493 "req_id": 1 00:23:57.493 } 00:23:57.493 Got JSON-RPC error response 00:23:57.493 response: 00:23:57.493 { 00:23:57.493 "code": -13, 00:23:57.493 "message": "Permission denied" 00:23:57.493 } 00:23:57.493 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:57.493 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:57.493 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:57.493 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:57.493 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:57.493 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:57.493 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:57.753 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:57.753 14:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:58.696 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:58.696 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:58.696 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.696 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:58.696 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:58.696 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.696 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.957 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.957 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:58.957 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:58.957 14:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:59.529 nvme0n1 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
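The re-key assertions in flight here (the NOT evaluation resumes below) all follow one pattern: rotate the keys the subsystem will accept for this host with nvmf_subsystem_set_keys, re-authenticate the live controller with bdev_nvme_set_keys, and check that any pair the target does not allow is rejected. Condensed from the logged commands, where NOT is the suite's expect-failure wrapper and hostrpc targets /var/tmp/host.sock:

# Target: accept key2 (host) / key3 (controller) for this host NQN.
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# Host: a mismatched pair must fail; as the response below shows, the
# RPC comes back with JSON-RPC error -13 "Permission denied".
NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0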
00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:59.529 14:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:24:00.102 request: 00:24:00.102 { 00:24:00.102 "name": "nvme0", 00:24:00.102 "dhchap_key": "key2", 00:24:00.102 "dhchap_ctrlr_key": "key0", 00:24:00.102 "method": "bdev_nvme_set_keys", 00:24:00.102 "req_id": 1 00:24:00.102 } 00:24:00.102 Got JSON-RPC error response 00:24:00.102 response: 00:24:00.102 { 00:24:00.102 "code": -13, 00:24:00.102 "message": "Permission denied" 00:24:00.102 } 00:24:00.102 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:00.102 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:00.102 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:00.102 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:00.102 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:00.102 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:00.102 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.363 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:00.363 14:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:01.306 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:01.306 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:01.306 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3406283 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3406283 ']' 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3406283 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:01.566 
14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3406283 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3406283' 00:24:01.566 killing process with pid 3406283 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3406283 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3406283 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:01.566 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:01.827 rmmod nvme_tcp 00:24:01.827 rmmod nvme_fabrics 00:24:01.827 rmmod nvme_keyring 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3432418 ']' 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3432418 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3432418 ']' 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3432418 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3432418 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3432418' 00:24:01.827 killing process with pid 3432418 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3432418 00:24:01.827 14:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3432418 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:01.827 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.828 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.828 14:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.374 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:04.374 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aGc /tmp/spdk.key-sha256.xVP /tmp/spdk.key-sha384.CEy /tmp/spdk.key-sha512.Ci8 /tmp/spdk.key-sha512.NAN /tmp/spdk.key-sha384.tg4 /tmp/spdk.key-sha256.otD '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:24:04.374 00:24:04.374 real 2m36.987s 00:24:04.374 user 5m53.172s 00:24:04.374 sys 0m24.882s 00:24:04.374 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.374 14:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.374 ************************************ 00:24:04.374 END TEST nvmf_auth_target 00:24:04.374 ************************************ 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:04.374 ************************************ 00:24:04.374 START TEST nvmf_bdevio_no_huge 00:24:04.374 ************************************ 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:04.374 * Looking for test storage... 
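(The bdevio storage probe continues just below.) The nvmf_auth_target teardown that completed above reduces to the following steps, each taken from the log: kill the host-side RPC daemon and the nvmf target, unload the kernel NVMe/TCP stack, undo the SPDK iptables rules, flush the test interface, and delete the generated DHHC-1 key files.

killprocess 3406283    # host-side RPC daemon (reactor_1)
killprocess 3432418    # nvmf target (reactor_0)
modprobe -v -r nvme-tcp       # log shows nvme_tcp, nvme_fabrics, nvme_keyring rmmod'd
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
rm -f /tmp/spdk.key-null.aGc /tmp/spdk.key-sha256.xVP /tmp/spdk.key-sha384.CEy \
      /tmp/spdk.key-sha512.Ci8 /tmp/spdk.key-sha512.NAN /tmp/spdk.key-sha384.tg4 \
      /tmp/spdk.key-sha256.otD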
00:24:04.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.374 --rc genhtml_branch_coverage=1 00:24:04.374 --rc genhtml_function_coverage=1 00:24:04.374 --rc genhtml_legend=1 00:24:04.374 --rc geninfo_all_blocks=1 00:24:04.374 --rc geninfo_unexecuted_blocks=1 00:24:04.374 00:24:04.374 ' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.374 --rc genhtml_branch_coverage=1 00:24:04.374 --rc genhtml_function_coverage=1 00:24:04.374 --rc genhtml_legend=1 00:24:04.374 --rc geninfo_all_blocks=1 00:24:04.374 --rc geninfo_unexecuted_blocks=1 00:24:04.374 00:24:04.374 ' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.374 --rc genhtml_branch_coverage=1 00:24:04.374 --rc genhtml_function_coverage=1 00:24:04.374 --rc genhtml_legend=1 00:24:04.374 --rc geninfo_all_blocks=1 00:24:04.374 --rc geninfo_unexecuted_blocks=1 00:24:04.374 00:24:04.374 ' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.374 --rc genhtml_branch_coverage=1 00:24:04.374 --rc genhtml_function_coverage=1 00:24:04.374 --rc genhtml_legend=1 00:24:04.374 --rc geninfo_all_blocks=1 00:24:04.374 --rc geninfo_unexecuted_blocks=1 00:24:04.374 00:24:04.374 ' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:24:04.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:24:04.374 14:22:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.515 
14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:12.515 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.515 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:12.516 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:12.516 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:12.516 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:24:12.516 00:24:12.516 --- 10.0.0.2 ping statistics --- 00:24:12.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.516 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:24:12.516 00:24:12.516 --- 10.0.0.1 ping statistics --- 00:24:12.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.516 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3440582 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3440582 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3440582 ']' 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.516 14:22:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.516 [2024-11-25 14:22:16.921201] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:24:12.516 [2024-11-25 14:22:16.921277] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:12.516 [2024-11-25 14:22:17.027011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.516 [2024-11-25 14:22:17.086780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.516 [2024-11-25 14:22:17.086827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.516 [2024-11-25 14:22:17.086836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.516 [2024-11-25 14:22:17.086847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.516 [2024-11-25 14:22:17.086853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
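The target whose DPDK EAL parameters were just logged runs inside a dedicated network namespace, so the initiator (10.0.0.1 on cvl_0_1) reaches the target (10.0.0.2 on cvl_0_0) over a real E810 port pair rather than loopback. A condensed sketch of the setup steps traced earlier, using only the names, addresses, and flags shown in this log (-s 1024 caps the app's memory at 1024 MB, since --no-huge forgoes hugepage-backed memory):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one NIC port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78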
00:24:12.516 [2024-11-25 14:22:17.088370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:12.516 [2024-11-25 14:22:17.088534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:12.517 [2024-11-25 14:22:17.088693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:12.517 [2024-11-25 14:22:17.088693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.778 [2024-11-25 14:22:17.800190] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.778 Malloc0 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:12.778 [2024-11-25 14:22:17.854130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:12.778 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:12.778 { 00:24:12.778 "params": { 00:24:12.779 "name": "Nvme$subsystem", 00:24:12.779 "trtype": "$TEST_TRANSPORT", 00:24:12.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.779 "adrfam": "ipv4", 00:24:12.779 "trsvcid": "$NVMF_PORT", 00:24:12.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.779 "hdgst": ${hdgst:-false}, 00:24:12.779 "ddgst": ${ddgst:-false} 00:24:12.779 }, 00:24:12.779 "method": "bdev_nvme_attach_controller" 00:24:12.779 } 00:24:12.779 EOF 00:24:12.779 )") 00:24:12.779 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:24:13.040 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:24:13.040 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:24:13.040 14:22:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:13.040 "params": { 00:24:13.040 "name": "Nvme1", 00:24:13.040 "trtype": "tcp", 00:24:13.040 "traddr": "10.0.0.2", 00:24:13.040 "adrfam": "ipv4", 00:24:13.040 "trsvcid": "4420", 00:24:13.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.040 "hdgst": false, 00:24:13.040 "ddgst": false 00:24:13.040 }, 00:24:13.040 "method": "bdev_nvme_attach_controller" 00:24:13.040 }' 00:24:13.040 [2024-11-25 14:22:17.913441] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
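The rpc_cmd calls above provision the test subsystem step by step; outside the harness the same sequence can be issued with scripts/rpc.py against the target's /var/tmp/spdk.sock. A sketch using the exact parameters from this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # flags exactly as traced above
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

gen_nvmf_target_json then renders the matching initiator half (the bdev_nvme_attach_controller parameters printed above) and hands it to bdevio over /dev/fd/62, so the initiator attaches to that listener without writing a config file to disk.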
00:24:13.040 [2024-11-25 14:22:17.913523] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3440788 ] 00:24:13.040 [2024-11-25 14:22:18.007999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.040 [2024-11-25 14:22:18.068970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.040 [2024-11-25 14:22:18.069133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.040 [2024-11-25 14:22:18.069133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.302 I/O targets: 00:24:13.302 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:13.302 00:24:13.302 00:24:13.302 CUnit - A unit testing framework for C - Version 2.1-3 00:24:13.302 http://cunit.sourceforge.net/ 00:24:13.302 00:24:13.302 00:24:13.302 Suite: bdevio tests on: Nvme1n1 00:24:13.563 Test: blockdev write read block ...passed 00:24:13.563 Test: blockdev write zeroes read block ...passed 00:24:13.563 Test: blockdev write zeroes read no split ...passed 00:24:13.563 Test: blockdev write zeroes read split ...passed 00:24:13.563 Test: blockdev write zeroes read split partial ...passed 00:24:13.563 Test: blockdev reset ...[2024-11-25 14:22:18.549913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:13.563 [2024-11-25 14:22:18.550010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176f980 (9): Bad file descriptor 00:24:13.563 [2024-11-25 14:22:18.604066] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:24:13.563 passed 00:24:13.563 Test: blockdev write read 8 blocks ...passed 00:24:13.563 Test: blockdev write read size > 128k ...passed 00:24:13.563 Test: blockdev write read invalid size ...passed 00:24:13.823 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:13.823 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:13.823 Test: blockdev write read max offset ...passed 00:24:13.823 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:13.823 Test: blockdev writev readv 8 blocks ...passed 00:24:13.823 Test: blockdev writev readv 30 x 1block ...passed 00:24:13.823 Test: blockdev writev readv block ...passed 00:24:13.823 Test: blockdev writev readv size > 128k ...passed 00:24:13.824 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:13.824 Test: blockdev comparev and writev ...[2024-11-25 14:22:18.865581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.865639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:13.824 [2024-11-25 14:22:18.865657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.865666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.824 [2024-11-25 14:22:18.866096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.866108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:13.824 [2024-11-25 14:22:18.866122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.866130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:13.824 [2024-11-25 14:22:18.866562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.866575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:13.824 [2024-11-25 14:22:18.866589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.866597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:13.824 [2024-11-25 14:22:18.867050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.867063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:13.824 [2024-11-25 14:22:18.867077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:13.824 [2024-11-25 14:22:18.867086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:13.824 passed 00:24:14.085 Test: blockdev nvme passthru rw ...passed 00:24:14.085 Test: blockdev nvme passthru vendor specific ...[2024-11-25 14:22:18.951769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.085 [2024-11-25 14:22:18.951787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:14.085 [2024-11-25 14:22:18.952019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.085 [2024-11-25 14:22:18.952030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:14.085 [2024-11-25 14:22:18.952306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.085 [2024-11-25 14:22:18.952317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:14.085 [2024-11-25 14:22:18.952584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:14.085 [2024-11-25 14:22:18.952594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:14.085 passed 00:24:14.085 Test: blockdev nvme admin passthru ...passed 00:24:14.085 Test: blockdev copy ...passed 00:24:14.085 00:24:14.085 Run Summary: Type Total Ran Passed Failed Inactive 00:24:14.085 suites 1 1 n/a 0 0 00:24:14.085 tests 23 23 23 0 0 00:24:14.085 asserts 152 152 152 0 n/a 00:24:14.085 00:24:14.085 Elapsed time = 1.286 seconds 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.345 rmmod nvme_tcp 00:24:14.345 rmmod nvme_fabrics 00:24:14.345 rmmod nvme_keyring 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3440582 ']' 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3440582 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3440582 ']' 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3440582 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.345 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3440582 00:24:14.605 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:24:14.605 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:24:14.605 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3440582' 00:24:14.605 killing process with pid 3440582 00:24:14.605 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3440582 00:24:14.605 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3440582 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.865 14:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.778 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.778 00:24:16.778 real 0m12.792s 00:24:16.778 user 0m15.349s 00:24:16.778 sys 0m6.815s 00:24:16.778 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.778 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.778 ************************************ 00:24:16.778 END TEST nvmf_bdevio_no_huge 00:24:16.778 ************************************ 00:24:17.038 14:22:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:17.038 14:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.038 14:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.038 14:22:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.038 ************************************ 00:24:17.038 START TEST nvmf_tls 00:24:17.038 ************************************ 00:24:17.038 14:22:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:17.038 * Looking for test storage... 00:24:17.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.038 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:17.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.300 --rc genhtml_branch_coverage=1 00:24:17.300 --rc genhtml_function_coverage=1 00:24:17.300 --rc genhtml_legend=1 00:24:17.300 --rc geninfo_all_blocks=1 00:24:17.300 --rc geninfo_unexecuted_blocks=1 00:24:17.300 00:24:17.300 ' 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:17.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.300 --rc genhtml_branch_coverage=1 00:24:17.300 --rc genhtml_function_coverage=1 00:24:17.300 --rc genhtml_legend=1 00:24:17.300 --rc geninfo_all_blocks=1 00:24:17.300 --rc geninfo_unexecuted_blocks=1 00:24:17.300 00:24:17.300 ' 00:24:17.300 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:17.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.300 --rc genhtml_branch_coverage=1 00:24:17.300 --rc genhtml_function_coverage=1 00:24:17.300 --rc genhtml_legend=1 00:24:17.300 --rc geninfo_all_blocks=1 00:24:17.301 --rc geninfo_unexecuted_blocks=1 00:24:17.301 00:24:17.301 ' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:17.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.301 --rc genhtml_branch_coverage=1 00:24:17.301 --rc genhtml_function_coverage=1 00:24:17.301 --rc genhtml_legend=1 00:24:17.301 --rc geninfo_all_blocks=1 00:24:17.301 --rc geninfo_unexecuted_blocks=1 00:24:17.301 00:24:17.301 ' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
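The lt 1.15 2 gate re-traced above is how scripts/common.sh picks the right lcov option spelling: the installed lcov version and the threshold 2 are split into fields and compared component by component, and a pre-2.0 lcov gets the legacy --rc lcov_branch_coverage/lcov_function_coverage names. A stripped-down sketch of that comparison, assuming plain numeric components (the ver_lt name is only illustrative; the harness's cmp_versions splits on the same .-: separators the trace shows):

  ver_lt() {                              # true when dotted version $1 < $2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
  }
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && use_legacy_rc_options=1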
00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.301 14:22:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
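The e810/x722/mlx arrays being filled above are keyed by PCI vendor:device pairs (0x8086 for Intel, 0x15b3 for Mellanox) out of a pci_bus_cache map built elsewhere in common.sh. A sketch of how such a cache can be derived from sysfs; the loop body here is illustrative, not the exact SPDK code:

  # Sketch: map "vendor:device" -> space-separated PCI addresses, matching the
  # pci_bus_cache lookups in the trace above.
  declare -A pci_bus_cache
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(< "$dev/vendor") device=$(< "$dev/device")   # e.g. 0x8086 / 0x159b
      pci_bus_cache["$vendor:$device"]+="${dev##*/} "
  done
  # With that cache, the E810 selection reduces to (word splitting intended):
  e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
  echo "${e810[@]}"   # here: the two 0000:4b:00.x ports reported just below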
00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:25.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:25.444 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:25.444 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:25.444 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.444 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:25.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:24:25.445 00:24:25.445 --- 10.0.0.2 ping statistics --- 00:24:25.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.445 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:25.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:24:25.445 00:24:25.445 --- 10.0.0.1 ping statistics --- 00:24:25.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.445 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3445282 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3445282 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3445282 ']' 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.445 14:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.445 [2024-11-25 14:22:29.726740] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
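The nvmf_tcp_init sequence traced above gives the test a real, non-loopback data path: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, its sibling (cvl_0_1) stays in the root namespace for the initiator, and both directions are verified with a ping before any NVMe traffic flows. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator

The ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment so cleanup can find it, and every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly the NVMF_TARGET_NS_CMD wrapper visible in the trace.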
00:24:25.445 [2024-11-25 14:22:29.726806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.445 [2024-11-25 14:22:29.826368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.445 [2024-11-25 14:22:29.876788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.445 [2024-11-25 14:22:29.876837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.445 [2024-11-25 14:22:29.876846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.445 [2024-11-25 14:22:29.876853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.445 [2024-11-25 14:22:29.876859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:25.445 [2024-11-25 14:22:29.877682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:25.704 true 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:25.704 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:25.964 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:25.964 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:25.964 14:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:26.224 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:26.224 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:26.484 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:26.484 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:26.484 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:26.484 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:26.484 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:26.745 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:26.745 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:26.745 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:26.745 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:27.005 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:27.005 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:27.005 14:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:27.005 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:27.005 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:27.265 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:27.265 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:27.265 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:27.265 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:27.265 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:27.525 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.hTYH1HDqOX 00:24:27.526 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:27.526 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.jc67cHvVCq 00:24:27.526 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:27.526 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:27.526 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.hTYH1HDqOX 00:24:27.526 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.jc67cHvVCq 00:24:27.526 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:27.786 14:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:28.046 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.hTYH1HDqOX 00:24:28.046 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.hTYH1HDqOX 00:24:28.046 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:28.306 [2024-11-25 14:22:33.158993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.306 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:28.306 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:28.566 [2024-11-25 14:22:33.491799] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.566 [2024-11-25 14:22:33.491986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.566 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:28.566 malloc0 00:24:28.826 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:28.826 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.hTYH1HDqOX 00:24:29.087 14:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:29.087 14:22:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hTYH1HDqOX 00:24:39.242 Initializing NVMe Controllers 00:24:39.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:39.242 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:39.242 Initialization complete. Launching workers. 00:24:39.242 ======================================================== 00:24:39.242 Latency(us) 00:24:39.242 Device Information : IOPS MiB/s Average min max 00:24:39.242 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18757.59 73.27 3412.12 1150.31 4774.69 00:24:39.242 ======================================================== 00:24:39.242 Total : 18757.59 73.27 3412.12 1150.31 4774.69 00:24:39.242 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hTYH1HDqOX 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hTYH1HDqOX 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3448113 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3448113 /var/tmp/bdevperf.sock 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3448113 ']' 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
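Up to this point the target side of the TLS test has been assembled over JSON-RPC; pulling just those calls out of the trace gives the following sequence, with $key_path standing for the 0600-mode /tmp/tmp.hTYH1HDqOX file created above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl                     # use the ssl sock layer
  $rpc sock_impl_set_options -i ssl --tls-version 13    # TLS 1.3
  $rpc framework_start_init                             # leave --wait-for-rpc mode
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc keyring_file_add_key key0 "$key_path"            # register the PSK file
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on the listener marks it TLS-enabled (hence the "TLS support is considered experimental" notice above), and --psk key0 pins host1 to the first interchange key; the spdk_nvme_perf run that follows connects with -S ssl and --psk-path pointing at the same file.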
00:24:39.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.242 14:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.242 [2024-11-25 14:22:44.319688] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:24:39.243 [2024-11-25 14:22:44.319744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448113 ] 00:24:39.504 [2024-11-25 14:22:44.406577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.504 [2024-11-25 14:22:44.442518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.074 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.074 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:40.075 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hTYH1HDqOX 00:24:40.334 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:40.334 [2024-11-25 14:22:45.413930] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:40.594 TLSTESTn1 00:24:40.594 14:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:40.594 Running I/O for 10 seconds... 
00:24:42.923 5021.00 IOPS, 19.61 MiB/s [2024-11-25T13:22:48.957Z] 5139.50 IOPS, 20.08 MiB/s [2024-11-25T13:22:49.900Z] 5405.67 IOPS, 21.12 MiB/s [2024-11-25T13:22:50.842Z] 5393.50 IOPS, 21.07 MiB/s [2024-11-25T13:22:51.784Z] 5386.40 IOPS, 21.04 MiB/s [2024-11-25T13:22:52.726Z] 5383.33 IOPS, 21.03 MiB/s [2024-11-25T13:22:53.670Z] 5409.71 IOPS, 21.13 MiB/s [2024-11-25T13:22:55.057Z] 5454.38 IOPS, 21.31 MiB/s [2024-11-25T13:22:55.628Z] 5470.78 IOPS, 21.37 MiB/s [2024-11-25T13:22:55.898Z] 5541.50 IOPS, 21.65 MiB/s 00:24:50.808 Latency(us) 00:24:50.808 [2024-11-25T13:22:55.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.808 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:50.808 Verification LBA range: start 0x0 length 0x2000 00:24:50.808 TLSTESTn1 : 10.01 5547.00 21.67 0.00 0.00 23041.57 5024.43 26651.31 00:24:50.808 [2024-11-25T13:22:55.898Z] =================================================================================================================== 00:24:50.808 [2024-11-25T13:22:55.898Z] Total : 5547.00 21.67 0.00 0.00 23041.57 5024.43 26651.31 00:24:50.808 { 00:24:50.808 "results": [ 00:24:50.808 { 00:24:50.808 "job": "TLSTESTn1", 00:24:50.808 "core_mask": "0x4", 00:24:50.808 "workload": "verify", 00:24:50.808 "status": "finished", 00:24:50.808 "verify_range": { 00:24:50.808 "start": 0, 00:24:50.808 "length": 8192 00:24:50.808 }, 00:24:50.808 "queue_depth": 128, 00:24:50.808 "io_size": 4096, 00:24:50.808 "runtime": 10.012975, 00:24:50.808 "iops": 5547.002763913822, 00:24:50.808 "mibps": 21.667979546538366, 00:24:50.808 "io_failed": 0, 00:24:50.808 "io_timeout": 0, 00:24:50.808 "avg_latency_us": 23041.5742311524, 00:24:50.808 "min_latency_us": 5024.426666666666, 00:24:50.808 "max_latency_us": 26651.306666666667 00:24:50.808 } 00:24:50.808 ], 00:24:50.808 "core_count": 1 00:24:50.808 } 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3448113 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3448113 ']' 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3448113 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3448113 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3448113' 00:24:50.808 killing process with pid 3448113 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3448113 00:24:50.808 Received shutdown signal, test time was about 10.000000 seconds 00:24:50.808 00:24:50.808 Latency(us) 00:24:50.808 [2024-11-25T13:22:55.898Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.808 [2024-11-25T13:22:55.898Z] 
=================================================================================================================== 00:24:50.808 [2024-11-25T13:22:55.898Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3448113 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jc67cHvVCq 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jc67cHvVCq 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jc67cHvVCq 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jc67cHvVCq 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3450371 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3450371 /var/tmp/bdevperf.sock 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3450371 ']' 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
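The run_bdevperf helper driving this and the following cases reduces to a small harness: start bdevperf idle (-z) on its own RPC socket, feed it the key and the controller over that socket, then trigger the workload from bdevperf.py. Condensed from the trace:

  bp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $bp -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &   # -z: wait for RPCs
  # (the harness blocks in waitforlisten until $sock is up, then:)
  $rpc -s "$sock" keyring_file_add_key key0 /tmp/tmp.hTYH1HDqOX
  $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s "$sock" perform_tests

The happy-path run above produced the TLSTESTn1 results table; the negative cases that follow reuse the same harness with a mismatched key, hostnqn, or subsystem and expect bdev_nvme_attach_controller to fail.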
00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.808 14:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:50.808 [2024-11-25 14:22:55.882068] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:24:50.808 [2024-11-25 14:22:55.882128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450371 ] 00:24:51.069 [2024-11-25 14:22:55.966640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.069 [2024-11-25 14:22:55.995307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.640 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.640 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:51.640 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jc67cHvVCq 00:24:51.901 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:51.901 [2024-11-25 14:22:56.961614] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.901 [2024-11-25 14:22:56.966233] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:51.901 [2024-11-25 14:22:56.966857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b9c20 (107): Transport endpoint is not connected 00:24:51.901 [2024-11-25 14:22:56.967852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b9c20 (9): Bad file descriptor 00:24:51.901 [2024-11-25 14:22:56.968854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:51.901 [2024-11-25 14:22:56.968861] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:51.901 [2024-11-25 14:22:56.968867] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:51.901 [2024-11-25 14:22:56.968875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:51.901 request: 00:24:51.901 { 00:24:51.901 "name": "TLSTEST", 00:24:51.901 "trtype": "tcp", 00:24:51.901 "traddr": "10.0.0.2", 00:24:51.901 "adrfam": "ipv4", 00:24:51.901 "trsvcid": "4420", 00:24:51.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:51.901 "prchk_reftag": false, 00:24:51.901 "prchk_guard": false, 00:24:51.901 "hdgst": false, 00:24:51.901 "ddgst": false, 00:24:51.901 "psk": "key0", 00:24:51.901 "allow_unrecognized_csi": false, 00:24:51.901 "method": "bdev_nvme_attach_controller", 00:24:51.901 "req_id": 1 00:24:51.901 } 00:24:51.901 Got JSON-RPC error response 00:24:51.901 response: 00:24:51.901 { 00:24:51.901 "code": -5, 00:24:51.901 "message": "Input/output error" 00:24:51.901 } 00:24:51.901 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3450371 00:24:51.901 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3450371 ']' 00:24:51.901 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3450371 00:24:51.901 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:51.901 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.163 14:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450371 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450371' 00:24:52.163 killing process with pid 3450371 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3450371 00:24:52.163 Received shutdown signal, test time was about 10.000000 seconds 00:24:52.163 00:24:52.163 Latency(us) 00:24:52.163 [2024-11-25T13:22:57.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.163 [2024-11-25T13:22:57.253Z] =================================================================================================================== 00:24:52.163 [2024-11-25T13:22:57.253Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3450371 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hTYH1HDqOX 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.hTYH1HDqOX 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hTYH1HDqOX 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hTYH1HDqOX 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:52.163 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3450708 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3450708 /var/tmp/bdevperf.sock 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3450708 ']' 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:52.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.164 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.164 [2024-11-25 14:22:57.195811] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
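Each negative case is wrapped in the NOT helper from autotest_common.sh, whose expansion is what the valid_exec_arg / es=1 lines in the trace show: run the command, capture its exit status, and invert it. A condensed sketch of that pattern (the signal-exit handling here is an assumption, and the real helper also screens its argument through valid_exec_arg):

  # Sketch: succeed only when the wrapped command fails, as in 'NOT run_bdevperf ...'.
  NOT() {
      local es=0
      "$@" || es=$?
      ((es > 128)) && return "$es"   # assumption: let signal deaths propagate
      ((es != 0))                    # failure of "$@" becomes success of NOT
  }
  NOT false && echo "failed as expected"

So the wrong-key attach above returning "Input/output error" makes NOT run_bdevperf succeed, and the test proceeds to the next case.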
00:24:52.164 [2024-11-25 14:22:57.195866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3450708 ] 00:24:52.425 [2024-11-25 14:22:57.280728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.425 [2024-11-25 14:22:57.308629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.996 14:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:52.996 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:52.996 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hTYH1HDqOX 00:24:53.257 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:53.257 [2024-11-25 14:22:58.335149] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:53.257 [2024-11-25 14:22:58.339847] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:53.257 [2024-11-25 14:22:58.339866] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:53.257 [2024-11-25 14:22:58.339886] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:53.257 [2024-11-25 14:22:58.340536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6bc20 (107): Transport endpoint is not connected 00:24:53.257 [2024-11-25 14:22:58.341530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6bc20 (9): Bad file descriptor 00:24:53.257 [2024-11-25 14:22:58.342532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:53.257 [2024-11-25 14:22:58.342539] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:53.257 [2024-11-25 14:22:58.342545] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:53.257 [2024-11-25 14:22:58.342553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:53.257 request: 00:24:53.257 { 00:24:53.257 "name": "TLSTEST", 00:24:53.257 "trtype": "tcp", 00:24:53.257 "traddr": "10.0.0.2", 00:24:53.257 "adrfam": "ipv4", 00:24:53.257 "trsvcid": "4420", 00:24:53.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.257 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:53.257 "prchk_reftag": false, 00:24:53.257 "prchk_guard": false, 00:24:53.257 "hdgst": false, 00:24:53.257 "ddgst": false, 00:24:53.257 "psk": "key0", 00:24:53.257 "allow_unrecognized_csi": false, 00:24:53.257 "method": "bdev_nvme_attach_controller", 00:24:53.257 "req_id": 1 00:24:53.257 } 00:24:53.257 Got JSON-RPC error response 00:24:53.257 response: 00:24:53.257 { 00:24:53.257 "code": -5, 00:24:53.257 "message": "Input/output error" 00:24:53.257 } 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3450708 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3450708 ']' 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3450708 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3450708 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3450708' 00:24:53.519 killing process with pid 3450708 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3450708 00:24:53.519 Received shutdown signal, test time was about 10.000000 seconds 00:24:53.519 00:24:53.519 Latency(us) 00:24:53.519 [2024-11-25T13:22:58.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.519 [2024-11-25T13:22:58.609Z] =================================================================================================================== 00:24:53.519 [2024-11-25T13:22:58.609Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3450708 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hTYH1HDqOX 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.hTYH1HDqOX 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hTYH1HDqOX 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.hTYH1HDqOX 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3451005 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3451005 /var/tmp/bdevperf.sock 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451005 ']' 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.519 14:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.519 [2024-11-25 14:22:58.583660] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
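For context, the bdevperf invocation traced here uses the RPC-wait pattern: -z starts the application idle on its own socket so keys and controllers can be configured before any I/O is issued. A minimal sketch with the flags from the trace (the perform_tests call mirrors the one issued later in this run; the 20-second wrapper timeout is the suite's choice, not a requirement):

    # Start bdevperf idle (-z) on core mask 0x4 with a private RPC socket
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # ...configure keyring/controllers over the socket, then start the workload:
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests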
00:24:53.519 [2024-11-25 14:22:58.583716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451005 ] 00:24:53.779 [2024-11-25 14:22:58.666883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.779 [2024-11-25 14:22:58.695589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.350 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.350 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:54.350 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.hTYH1HDqOX 00:24:54.612 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:54.874 [2024-11-25 14:22:59.705859] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:54.874 [2024-11-25 14:22:59.710578] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:54.874 [2024-11-25 14:22:59.710594] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:54.874 [2024-11-25 14:22:59.710613] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:54.874 [2024-11-25 14:22:59.711002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4c20 (107): Transport endpoint is not connected 00:24:54.874 [2024-11-25 14:22:59.711998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4c20 (9): Bad file descriptor 00:24:54.874 [2024-11-25 14:22:59.713000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:54.874 [2024-11-25 14:22:59.713007] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:54.874 [2024-11-25 14:22:59.713013] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:54.874 [2024-11-25 14:22:59.713021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:24:54.874 request: 00:24:54.874 { 00:24:54.874 "name": "TLSTEST", 00:24:54.874 "trtype": "tcp", 00:24:54.874 "traddr": "10.0.0.2", 00:24:54.874 "adrfam": "ipv4", 00:24:54.874 "trsvcid": "4420", 00:24:54.874 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:54.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.874 "prchk_reftag": false, 00:24:54.874 "prchk_guard": false, 00:24:54.874 "hdgst": false, 00:24:54.874 "ddgst": false, 00:24:54.874 "psk": "key0", 00:24:54.874 "allow_unrecognized_csi": false, 00:24:54.874 "method": "bdev_nvme_attach_controller", 00:24:54.874 "req_id": 1 00:24:54.874 } 00:24:54.874 Got JSON-RPC error response 00:24:54.874 response: 00:24:54.874 { 00:24:54.874 "code": -5, 00:24:54.874 "message": "Input/output error" 00:24:54.874 } 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3451005 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451005 ']' 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451005 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451005 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451005' 00:24:54.874 killing process with pid 3451005 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451005 00:24:54.874 Received shutdown signal, test time was about 10.000000 seconds 00:24:54.874 00:24:54.874 Latency(us) 00:24:54.874 [2024-11-25T13:22:59.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.874 [2024-11-25T13:22:59.964Z] =================================================================================================================== 00:24:54.874 [2024-11-25T13:22:59.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451005 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:54.874 
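The "NOT run_bdevperf ..." wrappers driving these cases assert that a command fails. A rough sketch of the idiom, assuming the helper only inverts the exit status (the real autotest_common.sh version, as the es=/valid_exec_arg traces show, also records the status and verifies the wrapped argument is actually executable):

    NOT() {
        # Succeed only when the wrapped command fails (simplified stand-in)
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hTYH1HDqOX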
14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3451178 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3451178 /var/tmp/bdevperf.sock 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451178 ']' 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.874 14:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.874 [2024-11-25 14:22:59.956832] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
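This case passes an empty string as the PSK path; the file keyring rejects anything that is not an absolute path before it ever reads the file, so the key-add call traced below fails with -1 (Operation not permitted):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''   # rejected: non-absolute path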
00:24:54.874 [2024-11-25 14:22:59.956888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451178 ] 00:24:55.135 [2024-11-25 14:23:00.043415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.135 [2024-11-25 14:23:00.072305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.706 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.706 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:55.706 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:55.967 [2024-11-25 14:23:00.907035] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:55.967 [2024-11-25 14:23:00.907062] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:55.967 request: 00:24:55.967 { 00:24:55.967 "name": "key0", 00:24:55.967 "path": "", 00:24:55.967 "method": "keyring_file_add_key", 00:24:55.967 "req_id": 1 00:24:55.967 } 00:24:55.967 Got JSON-RPC error response 00:24:55.967 response: 00:24:55.967 { 00:24:55.967 "code": -1, 00:24:55.967 "message": "Operation not permitted" 00:24:55.967 } 00:24:55.967 14:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:56.226 [2024-11-25 14:23:01.083564] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:56.226 [2024-11-25 14:23:01.083587] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:56.226 request: 00:24:56.226 { 00:24:56.226 "name": "TLSTEST", 00:24:56.226 "trtype": "tcp", 00:24:56.226 "traddr": "10.0.0.2", 00:24:56.226 "adrfam": "ipv4", 00:24:56.226 "trsvcid": "4420", 00:24:56.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.226 "prchk_reftag": false, 00:24:56.226 "prchk_guard": false, 00:24:56.226 "hdgst": false, 00:24:56.226 "ddgst": false, 00:24:56.226 "psk": "key0", 00:24:56.226 "allow_unrecognized_csi": false, 00:24:56.226 "method": "bdev_nvme_attach_controller", 00:24:56.226 "req_id": 1 00:24:56.226 } 00:24:56.226 Got JSON-RPC error response 00:24:56.226 response: 00:24:56.226 { 00:24:56.226 "code": -126, 00:24:56.226 "message": "Required key not available" 00:24:56.226 } 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3451178 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451178 ']' 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451178 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3451178 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451178' 00:24:56.226 killing process with pid 3451178 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451178 00:24:56.226 Received shutdown signal, test time was about 10.000000 seconds 00:24:56.226 00:24:56.226 Latency(us) 00:24:56.226 [2024-11-25T13:23:01.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.226 [2024-11-25T13:23:01.316Z] =================================================================================================================== 00:24:56.226 [2024-11-25T13:23:01.316Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:56.226 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451178 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3445282 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3445282 ']' 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3445282 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.227 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3445282 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3445282' 00:24:56.487 killing process with pid 3445282 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3445282 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3445282 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:56.487 14:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.92hhUuDJ2e 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.92hhUuDJ2e 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3451460 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3451460 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451460 ']' 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.487 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.488 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.488 14:23:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:56.488 [2024-11-25 14:23:01.565058] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:24:56.488 [2024-11-25 14:23:01.565126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.748 [2024-11-25 14:23:01.673621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.748 [2024-11-25 14:23:01.703957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.748 [2024-11-25 14:23:01.703988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
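The key_long value above is the NVMe TLS PSK interchange form: the configured key bytes with a CRC32 appended, base64-encoded between the version prefix and the hash identifier (02 selects SHA-384). A minimal sketch of what format_interchange_psk computes, assuming the CRC is appended little-endian as in the interchange format:

    python3 - <<'EOF'
    import base64, zlib
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte CRC32 of the key bytes
    print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
    EOF
    # Expected to match the traced value:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: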
00:24:56.748 [2024-11-25 14:23:01.703994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.748 [2024-11-25 14:23:01.703999] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.748 [2024-11-25 14:23:01.704003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.748 [2024-11-25 14:23:01.704462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.92hhUuDJ2e 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.92hhUuDJ2e 00:24:57.319 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:57.579 [2024-11-25 14:23:02.532820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.579 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:57.839 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:57.839 [2024-11-25 14:23:02.869652] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:57.839 [2024-11-25 14:23:02.869847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.839 14:23:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:58.100 malloc0 00:24:58.100 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:58.360 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:24:58.360 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.92hhUuDJ2e 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.92hhUuDJ2e 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3451961 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3451961 /var/tmp/bdevperf.sock 00:24:58.621 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:58.622 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3451961 ']' 00:24:58.622 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.622 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.622 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.622 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.622 14:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.622 [2024-11-25 14:23:03.602825] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
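Collected from the setup_nvmf_tgt trace above, the target-side sequence that lets the TLSTESTn1 run below succeed is, in order (rpc.py is spdk/scripts/rpc.py against the target's default socket; every argument is as traced):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0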
00:24:58.622 [2024-11-25 14:23:03.602879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3451961 ] 00:24:58.622 [2024-11-25 14:23:03.687806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.882 [2024-11-25 14:23:03.717068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.453 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.453 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:59.453 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:24:59.714 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:59.714 [2024-11-25 14:23:04.691876] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.714 TLSTESTn1 00:24:59.714 14:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:59.974 Running I/O for 10 seconds... 00:25:01.856 5391.00 IOPS, 21.06 MiB/s [2024-11-25T13:23:07.888Z] 4740.50 IOPS, 18.52 MiB/s [2024-11-25T13:23:09.271Z] 5145.67 IOPS, 20.10 MiB/s [2024-11-25T13:23:10.212Z] 5126.75 IOPS, 20.03 MiB/s [2024-11-25T13:23:11.152Z] 5026.80 IOPS, 19.64 MiB/s [2024-11-25T13:23:12.092Z] 5173.33 IOPS, 20.21 MiB/s [2024-11-25T13:23:13.035Z] 5331.57 IOPS, 20.83 MiB/s [2024-11-25T13:23:13.979Z] 5398.75 IOPS, 21.09 MiB/s [2024-11-25T13:23:14.920Z] 5319.56 IOPS, 20.78 MiB/s [2024-11-25T13:23:14.920Z] 5295.60 IOPS, 20.69 MiB/s 00:25:09.830 Latency(us) 00:25:09.830 [2024-11-25T13:23:14.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.830 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:09.830 Verification LBA range: start 0x0 length 0x2000 00:25:09.830 TLSTESTn1 : 10.02 5297.11 20.69 0.00 0.00 24123.38 6307.84 25668.27 00:25:09.830 [2024-11-25T13:23:14.920Z] =================================================================================================================== 00:25:09.830 [2024-11-25T13:23:14.920Z] Total : 5297.11 20.69 0.00 0.00 24123.38 6307.84 25668.27 00:25:09.830 { 00:25:09.830 "results": [ 00:25:09.830 { 00:25:09.830 "job": "TLSTESTn1", 00:25:09.830 "core_mask": "0x4", 00:25:09.830 "workload": "verify", 00:25:09.830 "status": "finished", 00:25:09.830 "verify_range": { 00:25:09.830 "start": 0, 00:25:09.830 "length": 8192 00:25:09.830 }, 00:25:09.830 "queue_depth": 128, 00:25:09.830 "io_size": 4096, 00:25:09.830 "runtime": 10.021128, 00:25:09.830 "iops": 5297.108269647888, 00:25:09.830 "mibps": 20.691829178312062, 00:25:09.830 "io_failed": 0, 00:25:09.830 "io_timeout": 0, 00:25:09.830 "avg_latency_us": 24123.378998172677, 00:25:09.830 "min_latency_us": 6307.84, 00:25:09.830 "max_latency_us": 25668.266666666666 00:25:09.830 } 00:25:09.830 ], 00:25:09.830 "core_count": 1 
00:25:09.830 } 00:25:10.091 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:10.091 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3451961 00:25:10.091 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451961 ']' 00:25:10.091 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451961 00:25:10.091 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:10.091 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.091 14:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451961 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451961' 00:25:10.091 killing process with pid 3451961 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451961 00:25:10.091 Received shutdown signal, test time was about 10.000000 seconds 00:25:10.091 00:25:10.091 Latency(us) 00:25:10.091 [2024-11-25T13:23:15.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.091 [2024-11-25T13:23:15.181Z] =================================================================================================================== 00:25:10.091 [2024-11-25T13:23:15.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451961 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.92hhUuDJ2e 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.92hhUuDJ2e 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.92hhUuDJ2e 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.92hhUuDJ2e 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:10.091 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:10.092 14:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.92hhUuDJ2e 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3454136 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3454136 /var/tmp/bdevperf.sock 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3454136 ']' 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.092 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.092 [2024-11-25 14:23:15.164828] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
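This run is the key-permissions negative test: the key file was loosened to 0666 above, and the file keyring refuses keys that are group- or world-accessible, so the keyring_file_add_key traced below fails with "Invalid permissions for key file ... 0100666". Owner-only access is what satisfies the check (the suite restores this itself later):

    chmod 0666 /tmp/tmp.92hhUuDJ2e   # rejected by keyring_file_check_path
    chmod 0600 /tmp/tmp.92hhUuDJ2e   # owner read/write only: accepted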
00:25:10.092 [2024-11-25 14:23:15.164890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454136 ] 00:25:10.352 [2024-11-25 14:23:15.246643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.352 [2024-11-25 14:23:15.275250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.923 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.923 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:10.923 14:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:25:11.184 [2024-11-25 14:23:16.105178] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.92hhUuDJ2e': 0100666 00:25:11.184 [2024-11-25 14:23:16.105200] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:11.184 request: 00:25:11.184 { 00:25:11.184 "name": "key0", 00:25:11.184 "path": "/tmp/tmp.92hhUuDJ2e", 00:25:11.184 "method": "keyring_file_add_key", 00:25:11.184 "req_id": 1 00:25:11.184 } 00:25:11.184 Got JSON-RPC error response 00:25:11.184 response: 00:25:11.184 { 00:25:11.184 "code": -1, 00:25:11.184 "message": "Operation not permitted" 00:25:11.184 } 00:25:11.184 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:11.445 [2024-11-25 14:23:16.273669] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.445 [2024-11-25 14:23:16.273693] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:11.445 request: 00:25:11.445 { 00:25:11.445 "name": "TLSTEST", 00:25:11.445 "trtype": "tcp", 00:25:11.445 "traddr": "10.0.0.2", 00:25:11.445 "adrfam": "ipv4", 00:25:11.445 "trsvcid": "4420", 00:25:11.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.445 "prchk_reftag": false, 00:25:11.445 "prchk_guard": false, 00:25:11.445 "hdgst": false, 00:25:11.445 "ddgst": false, 00:25:11.445 "psk": "key0", 00:25:11.445 "allow_unrecognized_csi": false, 00:25:11.445 "method": "bdev_nvme_attach_controller", 00:25:11.445 "req_id": 1 00:25:11.445 } 00:25:11.445 Got JSON-RPC error response 00:25:11.445 response: 00:25:11.445 { 00:25:11.445 "code": -126, 00:25:11.445 "message": "Required key not available" 00:25:11.445 } 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3454136 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3454136 ']' 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3454136 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3454136 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3454136' 00:25:11.445 killing process with pid 3454136 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3454136 00:25:11.445 Received shutdown signal, test time was about 10.000000 seconds 00:25:11.445 00:25:11.445 Latency(us) 00:25:11.445 [2024-11-25T13:23:16.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.445 [2024-11-25T13:23:16.535Z] =================================================================================================================== 00:25:11.445 [2024-11-25T13:23:16.535Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3454136 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3451460 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3451460 ']' 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3451460 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451460 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451460' 00:25:11.445 killing process with pid 3451460 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3451460 00:25:11.445 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3451460 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3454485 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3454485 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3454485 ']' 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.705 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.706 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.706 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.706 14:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.706 [2024-11-25 14:23:16.695125] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:11.706 [2024-11-25 14:23:16.695188] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.706 [2024-11-25 14:23:16.786452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.966 [2024-11-25 14:23:16.816130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.966 [2024-11-25 14:23:16.816167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.966 [2024-11-25 14:23:16.816173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.966 [2024-11-25 14:23:16.816179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.966 [2024-11-25 14:23:16.816184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:11.966 [2024-11-25 14:23:16.816666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.92hhUuDJ2e 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.92hhUuDJ2e 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.92hhUuDJ2e 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.92hhUuDJ2e 00:25:12.537 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:12.797 [2024-11-25 14:23:17.696647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.798 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:13.058 14:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:13.058 [2024-11-25 14:23:18.069565] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:13.058 [2024-11-25 14:23:18.069751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.058 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:13.318 malloc0 00:25:13.318 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:13.578 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:25:13.578 [2024-11-25 
14:23:18.592308] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.92hhUuDJ2e': 0100666 00:25:13.578 [2024-11-25 14:23:18.592328] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:13.578 request: 00:25:13.578 { 00:25:13.578 "name": "key0", 00:25:13.578 "path": "/tmp/tmp.92hhUuDJ2e", 00:25:13.578 "method": "keyring_file_add_key", 00:25:13.578 "req_id": 1 00:25:13.578 } 00:25:13.578 Got JSON-RPC error response 00:25:13.578 response: 00:25:13.578 { 00:25:13.578 "code": -1, 00:25:13.578 "message": "Operation not permitted" 00:25:13.578 } 00:25:13.579 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:13.839 [2024-11-25 14:23:18.760744] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:13.839 [2024-11-25 14:23:18.760769] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:13.839 request: 00:25:13.839 { 00:25:13.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.839 "host": "nqn.2016-06.io.spdk:host1", 00:25:13.839 "psk": "key0", 00:25:13.839 "method": "nvmf_subsystem_add_host", 00:25:13.839 "req_id": 1 00:25:13.839 } 00:25:13.839 Got JSON-RPC error response 00:25:13.839 response: 00:25:13.839 { 00:25:13.839 "code": -32603, 00:25:13.839 "message": "Internal error" 00:25:13.839 } 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3454485 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3454485 ']' 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3454485 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3454485 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3454485' 00:25:13.839 killing process with pid 3454485 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3454485 00:25:13.839 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3454485 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.92hhUuDJ2e 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:25:14.100 14:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3454896 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3454896 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3454896 ']' 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.100 14:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.100 [2024-11-25 14:23:19.022384] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:14.100 [2024-11-25 14:23:19.022443] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.100 [2024-11-25 14:23:19.112361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.100 [2024-11-25 14:23:19.145570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.100 [2024-11-25 14:23:19.145602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.100 [2024-11-25 14:23:19.145608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.100 [2024-11-25 14:23:19.145613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.100 [2024-11-25 14:23:19.145617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
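After the target comes back up with the 0600 key, the suite rebuilds the TLS configuration and snapshots it: the save_config RPC further below dumps the live JSON configuration of every subsystem. A minimal invocation (the output file name is illustrative):

    rpc.py save_config > tgt_config.json
    # The dump can be fed back at startup, e.g.: nvmf_tgt --json tgt_config.json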
00:25:14.100 [2024-11-25 14:23:19.146117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.92hhUuDJ2e 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.92hhUuDJ2e 00:25:15.042 14:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:15.042 [2024-11-25 14:23:20.019853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.042 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:15.304 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:15.304 [2024-11-25 14:23:20.380732] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:15.304 [2024-11-25 14:23:20.380921] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.564 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:15.564 malloc0 00:25:15.564 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:15.824 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:25:16.085 14:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3455452 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3455452 /var/tmp/bdevperf.sock 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3455452 ']' 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.085 14:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:16.085 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:16.346 [2024-11-25 14:23:21.177513] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:16.346 [2024-11-25 14:23:21.177568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455452 ] 00:25:16.346 [2024-11-25 14:23:21.267185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.346 [2024-11-25 14:23:21.302462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.918 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.918 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:16.918 14:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:25:17.179 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:17.440 [2024-11-25 14:23:22.313945] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:17.440 TLSTESTn1 00:25:17.440 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:17.702 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:17.702 "subsystems": [ 00:25:17.702 { 00:25:17.702 "subsystem": "keyring", 00:25:17.702 "config": [ 00:25:17.702 { 00:25:17.702 "method": "keyring_file_add_key", 00:25:17.702 "params": { 00:25:17.702 "name": "key0", 00:25:17.702 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:17.702 } 00:25:17.702 } 00:25:17.702 ] 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "subsystem": "iobuf", 00:25:17.702 "config": [ 00:25:17.702 { 00:25:17.702 "method": "iobuf_set_options", 00:25:17.702 "params": { 00:25:17.702 "small_pool_count": 8192, 00:25:17.702 "large_pool_count": 1024, 00:25:17.702 "small_bufsize": 8192, 00:25:17.702 "large_bufsize": 135168, 00:25:17.702 "enable_numa": false 00:25:17.702 } 00:25:17.702 } 00:25:17.702 ] 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "subsystem": "sock", 00:25:17.702 "config": [ 00:25:17.702 { 00:25:17.702 "method": "sock_set_default_impl", 00:25:17.702 "params": { 00:25:17.702 
"impl_name": "posix" 00:25:17.702 } 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "method": "sock_impl_set_options", 00:25:17.702 "params": { 00:25:17.702 "impl_name": "ssl", 00:25:17.702 "recv_buf_size": 4096, 00:25:17.702 "send_buf_size": 4096, 00:25:17.702 "enable_recv_pipe": true, 00:25:17.702 "enable_quickack": false, 00:25:17.702 "enable_placement_id": 0, 00:25:17.702 "enable_zerocopy_send_server": true, 00:25:17.702 "enable_zerocopy_send_client": false, 00:25:17.702 "zerocopy_threshold": 0, 00:25:17.702 "tls_version": 0, 00:25:17.702 "enable_ktls": false 00:25:17.702 } 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "method": "sock_impl_set_options", 00:25:17.702 "params": { 00:25:17.702 "impl_name": "posix", 00:25:17.702 "recv_buf_size": 2097152, 00:25:17.702 "send_buf_size": 2097152, 00:25:17.702 "enable_recv_pipe": true, 00:25:17.702 "enable_quickack": false, 00:25:17.702 "enable_placement_id": 0, 00:25:17.702 "enable_zerocopy_send_server": true, 00:25:17.702 "enable_zerocopy_send_client": false, 00:25:17.702 "zerocopy_threshold": 0, 00:25:17.702 "tls_version": 0, 00:25:17.702 "enable_ktls": false 00:25:17.702 } 00:25:17.702 } 00:25:17.702 ] 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "subsystem": "vmd", 00:25:17.702 "config": [] 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "subsystem": "accel", 00:25:17.702 "config": [ 00:25:17.702 { 00:25:17.702 "method": "accel_set_options", 00:25:17.702 "params": { 00:25:17.702 "small_cache_size": 128, 00:25:17.702 "large_cache_size": 16, 00:25:17.702 "task_count": 2048, 00:25:17.702 "sequence_count": 2048, 00:25:17.702 "buf_count": 2048 00:25:17.702 } 00:25:17.702 } 00:25:17.702 ] 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "subsystem": "bdev", 00:25:17.702 "config": [ 00:25:17.702 { 00:25:17.702 "method": "bdev_set_options", 00:25:17.702 "params": { 00:25:17.702 "bdev_io_pool_size": 65535, 00:25:17.702 "bdev_io_cache_size": 256, 00:25:17.702 "bdev_auto_examine": true, 00:25:17.702 "iobuf_small_cache_size": 128, 00:25:17.702 "iobuf_large_cache_size": 16 00:25:17.702 } 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "method": "bdev_raid_set_options", 00:25:17.702 "params": { 00:25:17.702 "process_window_size_kb": 1024, 00:25:17.702 "process_max_bandwidth_mb_sec": 0 00:25:17.702 } 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "method": "bdev_iscsi_set_options", 00:25:17.702 "params": { 00:25:17.702 "timeout_sec": 30 00:25:17.702 } 00:25:17.702 }, 00:25:17.702 { 00:25:17.702 "method": "bdev_nvme_set_options", 00:25:17.702 "params": { 00:25:17.702 "action_on_timeout": "none", 00:25:17.702 "timeout_us": 0, 00:25:17.702 "timeout_admin_us": 0, 00:25:17.702 "keep_alive_timeout_ms": 10000, 00:25:17.702 "arbitration_burst": 0, 00:25:17.702 "low_priority_weight": 0, 00:25:17.702 "medium_priority_weight": 0, 00:25:17.702 "high_priority_weight": 0, 00:25:17.702 "nvme_adminq_poll_period_us": 10000, 00:25:17.702 "nvme_ioq_poll_period_us": 0, 00:25:17.703 "io_queue_requests": 0, 00:25:17.703 "delay_cmd_submit": true, 00:25:17.703 "transport_retry_count": 4, 00:25:17.703 "bdev_retry_count": 3, 00:25:17.703 "transport_ack_timeout": 0, 00:25:17.703 "ctrlr_loss_timeout_sec": 0, 00:25:17.703 "reconnect_delay_sec": 0, 00:25:17.703 "fast_io_fail_timeout_sec": 0, 00:25:17.703 "disable_auto_failback": false, 00:25:17.703 "generate_uuids": false, 00:25:17.703 "transport_tos": 0, 00:25:17.703 "nvme_error_stat": false, 00:25:17.703 "rdma_srq_size": 0, 00:25:17.703 "io_path_stat": false, 00:25:17.703 "allow_accel_sequence": false, 00:25:17.703 "rdma_max_cq_size": 0, 00:25:17.703 
"rdma_cm_event_timeout_ms": 0, 00:25:17.703 "dhchap_digests": [ 00:25:17.703 "sha256", 00:25:17.703 "sha384", 00:25:17.703 "sha512" 00:25:17.703 ], 00:25:17.703 "dhchap_dhgroups": [ 00:25:17.703 "null", 00:25:17.703 "ffdhe2048", 00:25:17.703 "ffdhe3072", 00:25:17.703 "ffdhe4096", 00:25:17.703 "ffdhe6144", 00:25:17.703 "ffdhe8192" 00:25:17.703 ] 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "bdev_nvme_set_hotplug", 00:25:17.703 "params": { 00:25:17.703 "period_us": 100000, 00:25:17.703 "enable": false 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "bdev_malloc_create", 00:25:17.703 "params": { 00:25:17.703 "name": "malloc0", 00:25:17.703 "num_blocks": 8192, 00:25:17.703 "block_size": 4096, 00:25:17.703 "physical_block_size": 4096, 00:25:17.703 "uuid": "0d0a7746-fe51-434d-ba0a-321f01a61b00", 00:25:17.703 "optimal_io_boundary": 0, 00:25:17.703 "md_size": 0, 00:25:17.703 "dif_type": 0, 00:25:17.703 "dif_is_head_of_md": false, 00:25:17.703 "dif_pi_format": 0 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "bdev_wait_for_examine" 00:25:17.703 } 00:25:17.703 ] 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "subsystem": "nbd", 00:25:17.703 "config": [] 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "subsystem": "scheduler", 00:25:17.703 "config": [ 00:25:17.703 { 00:25:17.703 "method": "framework_set_scheduler", 00:25:17.703 "params": { 00:25:17.703 "name": "static" 00:25:17.703 } 00:25:17.703 } 00:25:17.703 ] 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "subsystem": "nvmf", 00:25:17.703 "config": [ 00:25:17.703 { 00:25:17.703 "method": "nvmf_set_config", 00:25:17.703 "params": { 00:25:17.703 "discovery_filter": "match_any", 00:25:17.703 "admin_cmd_passthru": { 00:25:17.703 "identify_ctrlr": false 00:25:17.703 }, 00:25:17.703 "dhchap_digests": [ 00:25:17.703 "sha256", 00:25:17.703 "sha384", 00:25:17.703 "sha512" 00:25:17.703 ], 00:25:17.703 "dhchap_dhgroups": [ 00:25:17.703 "null", 00:25:17.703 "ffdhe2048", 00:25:17.703 "ffdhe3072", 00:25:17.703 "ffdhe4096", 00:25:17.703 "ffdhe6144", 00:25:17.703 "ffdhe8192" 00:25:17.703 ] 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "nvmf_set_max_subsystems", 00:25:17.703 "params": { 00:25:17.703 "max_subsystems": 1024 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "nvmf_set_crdt", 00:25:17.703 "params": { 00:25:17.703 "crdt1": 0, 00:25:17.703 "crdt2": 0, 00:25:17.703 "crdt3": 0 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "nvmf_create_transport", 00:25:17.703 "params": { 00:25:17.703 "trtype": "TCP", 00:25:17.703 "max_queue_depth": 128, 00:25:17.703 "max_io_qpairs_per_ctrlr": 127, 00:25:17.703 "in_capsule_data_size": 4096, 00:25:17.703 "max_io_size": 131072, 00:25:17.703 "io_unit_size": 131072, 00:25:17.703 "max_aq_depth": 128, 00:25:17.703 "num_shared_buffers": 511, 00:25:17.703 "buf_cache_size": 4294967295, 00:25:17.703 "dif_insert_or_strip": false, 00:25:17.703 "zcopy": false, 00:25:17.703 "c2h_success": false, 00:25:17.703 "sock_priority": 0, 00:25:17.703 "abort_timeout_sec": 1, 00:25:17.703 "ack_timeout": 0, 00:25:17.703 "data_wr_pool_size": 0 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "nvmf_create_subsystem", 00:25:17.703 "params": { 00:25:17.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.703 "allow_any_host": false, 00:25:17.703 "serial_number": "SPDK00000000000001", 00:25:17.703 "model_number": "SPDK bdev Controller", 00:25:17.703 "max_namespaces": 10, 00:25:17.703 "min_cntlid": 1, 00:25:17.703 
"max_cntlid": 65519, 00:25:17.703 "ana_reporting": false 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "nvmf_subsystem_add_host", 00:25:17.703 "params": { 00:25:17.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.703 "host": "nqn.2016-06.io.spdk:host1", 00:25:17.703 "psk": "key0" 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "nvmf_subsystem_add_ns", 00:25:17.703 "params": { 00:25:17.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.703 "namespace": { 00:25:17.703 "nsid": 1, 00:25:17.703 "bdev_name": "malloc0", 00:25:17.703 "nguid": "0D0A7746FE51434DBA0A321F01A61B00", 00:25:17.703 "uuid": "0d0a7746-fe51-434d-ba0a-321f01a61b00", 00:25:17.703 "no_auto_visible": false 00:25:17.703 } 00:25:17.703 } 00:25:17.703 }, 00:25:17.703 { 00:25:17.703 "method": "nvmf_subsystem_add_listener", 00:25:17.703 "params": { 00:25:17.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.703 "listen_address": { 00:25:17.703 "trtype": "TCP", 00:25:17.703 "adrfam": "IPv4", 00:25:17.703 "traddr": "10.0.0.2", 00:25:17.703 "trsvcid": "4420" 00:25:17.703 }, 00:25:17.703 "secure_channel": true 00:25:17.703 } 00:25:17.703 } 00:25:17.703 ] 00:25:17.703 } 00:25:17.703 ] 00:25:17.703 }' 00:25:17.703 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:17.965 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:17.965 "subsystems": [ 00:25:17.965 { 00:25:17.965 "subsystem": "keyring", 00:25:17.965 "config": [ 00:25:17.965 { 00:25:17.965 "method": "keyring_file_add_key", 00:25:17.965 "params": { 00:25:17.965 "name": "key0", 00:25:17.965 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:17.965 } 00:25:17.965 } 00:25:17.965 ] 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "subsystem": "iobuf", 00:25:17.965 "config": [ 00:25:17.965 { 00:25:17.965 "method": "iobuf_set_options", 00:25:17.965 "params": { 00:25:17.965 "small_pool_count": 8192, 00:25:17.965 "large_pool_count": 1024, 00:25:17.965 "small_bufsize": 8192, 00:25:17.965 "large_bufsize": 135168, 00:25:17.965 "enable_numa": false 00:25:17.965 } 00:25:17.965 } 00:25:17.965 ] 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "subsystem": "sock", 00:25:17.965 "config": [ 00:25:17.965 { 00:25:17.965 "method": "sock_set_default_impl", 00:25:17.965 "params": { 00:25:17.965 "impl_name": "posix" 00:25:17.965 } 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "method": "sock_impl_set_options", 00:25:17.965 "params": { 00:25:17.965 "impl_name": "ssl", 00:25:17.965 "recv_buf_size": 4096, 00:25:17.965 "send_buf_size": 4096, 00:25:17.965 "enable_recv_pipe": true, 00:25:17.965 "enable_quickack": false, 00:25:17.965 "enable_placement_id": 0, 00:25:17.965 "enable_zerocopy_send_server": true, 00:25:17.965 "enable_zerocopy_send_client": false, 00:25:17.965 "zerocopy_threshold": 0, 00:25:17.965 "tls_version": 0, 00:25:17.965 "enable_ktls": false 00:25:17.965 } 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "method": "sock_impl_set_options", 00:25:17.965 "params": { 00:25:17.965 "impl_name": "posix", 00:25:17.965 "recv_buf_size": 2097152, 00:25:17.965 "send_buf_size": 2097152, 00:25:17.965 "enable_recv_pipe": true, 00:25:17.965 "enable_quickack": false, 00:25:17.965 "enable_placement_id": 0, 00:25:17.965 "enable_zerocopy_send_server": true, 00:25:17.965 "enable_zerocopy_send_client": false, 00:25:17.965 "zerocopy_threshold": 0, 00:25:17.965 "tls_version": 0, 00:25:17.965 "enable_ktls": false 00:25:17.965 } 00:25:17.965 
} 00:25:17.965 ] 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "subsystem": "vmd", 00:25:17.965 "config": [] 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "subsystem": "accel", 00:25:17.965 "config": [ 00:25:17.965 { 00:25:17.965 "method": "accel_set_options", 00:25:17.965 "params": { 00:25:17.965 "small_cache_size": 128, 00:25:17.965 "large_cache_size": 16, 00:25:17.965 "task_count": 2048, 00:25:17.965 "sequence_count": 2048, 00:25:17.965 "buf_count": 2048 00:25:17.965 } 00:25:17.965 } 00:25:17.965 ] 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "subsystem": "bdev", 00:25:17.965 "config": [ 00:25:17.965 { 00:25:17.965 "method": "bdev_set_options", 00:25:17.965 "params": { 00:25:17.965 "bdev_io_pool_size": 65535, 00:25:17.965 "bdev_io_cache_size": 256, 00:25:17.965 "bdev_auto_examine": true, 00:25:17.965 "iobuf_small_cache_size": 128, 00:25:17.965 "iobuf_large_cache_size": 16 00:25:17.965 } 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "method": "bdev_raid_set_options", 00:25:17.965 "params": { 00:25:17.965 "process_window_size_kb": 1024, 00:25:17.965 "process_max_bandwidth_mb_sec": 0 00:25:17.965 } 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "method": "bdev_iscsi_set_options", 00:25:17.965 "params": { 00:25:17.965 "timeout_sec": 30 00:25:17.965 } 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "method": "bdev_nvme_set_options", 00:25:17.965 "params": { 00:25:17.965 "action_on_timeout": "none", 00:25:17.965 "timeout_us": 0, 00:25:17.965 "timeout_admin_us": 0, 00:25:17.965 "keep_alive_timeout_ms": 10000, 00:25:17.965 "arbitration_burst": 0, 00:25:17.965 "low_priority_weight": 0, 00:25:17.965 "medium_priority_weight": 0, 00:25:17.965 "high_priority_weight": 0, 00:25:17.965 "nvme_adminq_poll_period_us": 10000, 00:25:17.965 "nvme_ioq_poll_period_us": 0, 00:25:17.965 "io_queue_requests": 512, 00:25:17.965 "delay_cmd_submit": true, 00:25:17.965 "transport_retry_count": 4, 00:25:17.965 "bdev_retry_count": 3, 00:25:17.965 "transport_ack_timeout": 0, 00:25:17.965 "ctrlr_loss_timeout_sec": 0, 00:25:17.965 "reconnect_delay_sec": 0, 00:25:17.965 "fast_io_fail_timeout_sec": 0, 00:25:17.965 "disable_auto_failback": false, 00:25:17.965 "generate_uuids": false, 00:25:17.965 "transport_tos": 0, 00:25:17.965 "nvme_error_stat": false, 00:25:17.965 "rdma_srq_size": 0, 00:25:17.965 "io_path_stat": false, 00:25:17.965 "allow_accel_sequence": false, 00:25:17.965 "rdma_max_cq_size": 0, 00:25:17.965 "rdma_cm_event_timeout_ms": 0, 00:25:17.965 "dhchap_digests": [ 00:25:17.965 "sha256", 00:25:17.965 "sha384", 00:25:17.965 "sha512" 00:25:17.965 ], 00:25:17.965 "dhchap_dhgroups": [ 00:25:17.965 "null", 00:25:17.965 "ffdhe2048", 00:25:17.965 "ffdhe3072", 00:25:17.965 "ffdhe4096", 00:25:17.965 "ffdhe6144", 00:25:17.965 "ffdhe8192" 00:25:17.965 ] 00:25:17.965 } 00:25:17.965 }, 00:25:17.965 { 00:25:17.965 "method": "bdev_nvme_attach_controller", 00:25:17.965 "params": { 00:25:17.965 "name": "TLSTEST", 00:25:17.965 "trtype": "TCP", 00:25:17.965 "adrfam": "IPv4", 00:25:17.965 "traddr": "10.0.0.2", 00:25:17.965 "trsvcid": "4420", 00:25:17.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.965 "prchk_reftag": false, 00:25:17.965 "prchk_guard": false, 00:25:17.965 "ctrlr_loss_timeout_sec": 0, 00:25:17.965 "reconnect_delay_sec": 0, 00:25:17.966 "fast_io_fail_timeout_sec": 0, 00:25:17.966 "psk": "key0", 00:25:17.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:17.966 "hdgst": false, 00:25:17.966 "ddgst": false, 00:25:17.966 "multipath": "multipath" 00:25:17.966 } 00:25:17.966 }, 00:25:17.966 { 00:25:17.966 "method": 
"bdev_nvme_set_hotplug", 00:25:17.966 "params": { 00:25:17.966 "period_us": 100000, 00:25:17.966 "enable": false 00:25:17.966 } 00:25:17.966 }, 00:25:17.966 { 00:25:17.966 "method": "bdev_wait_for_examine" 00:25:17.966 } 00:25:17.966 ] 00:25:17.966 }, 00:25:17.966 { 00:25:17.966 "subsystem": "nbd", 00:25:17.966 "config": [] 00:25:17.966 } 00:25:17.966 ] 00:25:17.966 }' 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3455452 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3455452 ']' 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3455452 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455452 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455452' 00:25:17.966 killing process with pid 3455452 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3455452 00:25:17.966 Received shutdown signal, test time was about 10.000000 seconds 00:25:17.966 00:25:17.966 Latency(us) 00:25:17.966 [2024-11-25T13:23:23.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.966 [2024-11-25T13:23:23.056Z] =================================================================================================================== 00:25:17.966 [2024-11-25T13:23:23.056Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:17.966 14:23:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3455452 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3454896 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3454896 ']' 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3454896 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3454896 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3454896' 00:25:18.228 killing process with pid 3454896 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3454896 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3454896 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.228 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:18.228 "subsystems": [ 00:25:18.228 { 00:25:18.228 "subsystem": "keyring", 00:25:18.228 "config": [ 00:25:18.228 { 00:25:18.228 "method": "keyring_file_add_key", 00:25:18.228 "params": { 00:25:18.228 "name": "key0", 00:25:18.228 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:18.228 } 00:25:18.228 } 00:25:18.228 ] 00:25:18.228 }, 00:25:18.228 { 00:25:18.228 "subsystem": "iobuf", 00:25:18.228 "config": [ 00:25:18.228 { 00:25:18.228 "method": "iobuf_set_options", 00:25:18.228 "params": { 00:25:18.228 "small_pool_count": 8192, 00:25:18.228 "large_pool_count": 1024, 00:25:18.228 "small_bufsize": 8192, 00:25:18.228 "large_bufsize": 135168, 00:25:18.228 "enable_numa": false 00:25:18.228 } 00:25:18.228 } 00:25:18.228 ] 00:25:18.228 }, 00:25:18.228 { 00:25:18.228 "subsystem": "sock", 00:25:18.228 "config": [ 00:25:18.228 { 00:25:18.228 "method": "sock_set_default_impl", 00:25:18.228 "params": { 00:25:18.228 "impl_name": "posix" 00:25:18.228 } 00:25:18.228 }, 00:25:18.228 { 00:25:18.228 "method": "sock_impl_set_options", 00:25:18.228 "params": { 00:25:18.228 "impl_name": "ssl", 00:25:18.228 "recv_buf_size": 4096, 00:25:18.228 "send_buf_size": 4096, 00:25:18.228 "enable_recv_pipe": true, 00:25:18.228 "enable_quickack": false, 00:25:18.228 "enable_placement_id": 0, 00:25:18.228 "enable_zerocopy_send_server": true, 00:25:18.228 "enable_zerocopy_send_client": false, 00:25:18.228 "zerocopy_threshold": 0, 00:25:18.228 "tls_version": 0, 00:25:18.228 "enable_ktls": false 00:25:18.228 } 00:25:18.228 }, 00:25:18.228 { 00:25:18.228 "method": "sock_impl_set_options", 00:25:18.228 "params": { 00:25:18.229 "impl_name": "posix", 00:25:18.229 "recv_buf_size": 2097152, 00:25:18.229 "send_buf_size": 2097152, 00:25:18.229 "enable_recv_pipe": true, 00:25:18.229 "enable_quickack": false, 00:25:18.229 "enable_placement_id": 0, 00:25:18.229 "enable_zerocopy_send_server": true, 00:25:18.229 "enable_zerocopy_send_client": false, 00:25:18.229 "zerocopy_threshold": 0, 00:25:18.229 "tls_version": 0, 00:25:18.229 "enable_ktls": false 00:25:18.229 } 00:25:18.229 } 00:25:18.229 ] 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "subsystem": "vmd", 00:25:18.229 "config": [] 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "subsystem": "accel", 00:25:18.229 "config": [ 00:25:18.229 { 00:25:18.229 "method": "accel_set_options", 00:25:18.229 "params": { 00:25:18.229 "small_cache_size": 128, 00:25:18.229 "large_cache_size": 16, 00:25:18.229 "task_count": 2048, 00:25:18.229 "sequence_count": 2048, 00:25:18.229 "buf_count": 2048 00:25:18.229 } 00:25:18.229 } 00:25:18.229 ] 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "subsystem": "bdev", 00:25:18.229 "config": [ 00:25:18.229 { 00:25:18.229 "method": "bdev_set_options", 00:25:18.229 "params": { 00:25:18.229 "bdev_io_pool_size": 65535, 00:25:18.229 "bdev_io_cache_size": 256, 00:25:18.229 "bdev_auto_examine": true, 00:25:18.229 "iobuf_small_cache_size": 128, 00:25:18.229 "iobuf_large_cache_size": 16 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "bdev_raid_set_options", 00:25:18.229 "params": { 00:25:18.229 
"process_window_size_kb": 1024, 00:25:18.229 "process_max_bandwidth_mb_sec": 0 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "bdev_iscsi_set_options", 00:25:18.229 "params": { 00:25:18.229 "timeout_sec": 30 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "bdev_nvme_set_options", 00:25:18.229 "params": { 00:25:18.229 "action_on_timeout": "none", 00:25:18.229 "timeout_us": 0, 00:25:18.229 "timeout_admin_us": 0, 00:25:18.229 "keep_alive_timeout_ms": 10000, 00:25:18.229 "arbitration_burst": 0, 00:25:18.229 "low_priority_weight": 0, 00:25:18.229 "medium_priority_weight": 0, 00:25:18.229 "high_priority_weight": 0, 00:25:18.229 "nvme_adminq_poll_period_us": 10000, 00:25:18.229 "nvme_ioq_poll_period_us": 0, 00:25:18.229 "io_queue_requests": 0, 00:25:18.229 "delay_cmd_submit": true, 00:25:18.229 "transport_retry_count": 4, 00:25:18.229 "bdev_retry_count": 3, 00:25:18.229 "transport_ack_timeout": 0, 00:25:18.229 "ctrlr_loss_timeout_sec": 0, 00:25:18.229 "reconnect_delay_sec": 0, 00:25:18.229 "fast_io_fail_timeout_sec": 0, 00:25:18.229 "disable_auto_failback": false, 00:25:18.229 "generate_uuids": false, 00:25:18.229 "transport_tos": 0, 00:25:18.229 "nvme_error_stat": false, 00:25:18.229 "rdma_srq_size": 0, 00:25:18.229 "io_path_stat": false, 00:25:18.229 "allow_accel_sequence": false, 00:25:18.229 "rdma_max_cq_size": 0, 00:25:18.229 "rdma_cm_event_timeout_ms": 0, 00:25:18.229 "dhchap_digests": [ 00:25:18.229 "sha256", 00:25:18.229 "sha384", 00:25:18.229 "sha512" 00:25:18.229 ], 00:25:18.229 "dhchap_dhgroups": [ 00:25:18.229 "null", 00:25:18.229 "ffdhe2048", 00:25:18.229 "ffdhe3072", 00:25:18.229 "ffdhe4096", 00:25:18.229 "ffdhe6144", 00:25:18.229 "ffdhe8192" 00:25:18.229 ] 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "bdev_nvme_set_hotplug", 00:25:18.229 "params": { 00:25:18.229 "period_us": 100000, 00:25:18.229 "enable": false 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "bdev_malloc_create", 00:25:18.229 "params": { 00:25:18.229 "name": "malloc0", 00:25:18.229 "num_blocks": 8192, 00:25:18.229 "block_size": 4096, 00:25:18.229 "physical_block_size": 4096, 00:25:18.229 "uuid": "0d0a7746-fe51-434d-ba0a-321f01a61b00", 00:25:18.229 "optimal_io_boundary": 0, 00:25:18.229 "md_size": 0, 00:25:18.229 "dif_type": 0, 00:25:18.229 "dif_is_head_of_md": false, 00:25:18.229 "dif_pi_format": 0 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "bdev_wait_for_examine" 00:25:18.229 } 00:25:18.229 ] 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "subsystem": "nbd", 00:25:18.229 "config": [] 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "subsystem": "scheduler", 00:25:18.229 "config": [ 00:25:18.229 { 00:25:18.229 "method": "framework_set_scheduler", 00:25:18.229 "params": { 00:25:18.229 "name": "static" 00:25:18.229 } 00:25:18.229 } 00:25:18.229 ] 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "subsystem": "nvmf", 00:25:18.229 "config": [ 00:25:18.229 { 00:25:18.229 "method": "nvmf_set_config", 00:25:18.229 "params": { 00:25:18.229 "discovery_filter": "match_any", 00:25:18.229 "admin_cmd_passthru": { 00:25:18.229 "identify_ctrlr": false 00:25:18.229 }, 00:25:18.229 "dhchap_digests": [ 00:25:18.229 "sha256", 00:25:18.229 "sha384", 00:25:18.229 "sha512" 00:25:18.229 ], 00:25:18.229 "dhchap_dhgroups": [ 00:25:18.229 "null", 00:25:18.229 "ffdhe2048", 00:25:18.229 "ffdhe3072", 00:25:18.229 "ffdhe4096", 00:25:18.229 "ffdhe6144", 00:25:18.229 "ffdhe8192" 00:25:18.229 ] 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 
00:25:18.229 "method": "nvmf_set_max_subsystems", 00:25:18.229 "params": { 00:25:18.229 "max_subsystems": 1024 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "nvmf_set_crdt", 00:25:18.229 "params": { 00:25:18.229 "crdt1": 0, 00:25:18.229 "crdt2": 0, 00:25:18.229 "crdt3": 0 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "nvmf_create_transport", 00:25:18.229 "params": { 00:25:18.229 "trtype": "TCP", 00:25:18.229 "max_queue_depth": 128, 00:25:18.229 "max_io_qpairs_per_ctrlr": 127, 00:25:18.229 "in_capsule_data_size": 4096, 00:25:18.229 "max_io_size": 131072, 00:25:18.229 "io_unit_size": 131072, 00:25:18.229 "max_aq_depth": 128, 00:25:18.229 "num_shared_buffers": 511, 00:25:18.229 "buf_cache_size": 4294967295, 00:25:18.229 "dif_insert_or_strip": false, 00:25:18.229 "zcopy": false, 00:25:18.229 "c2h_success": false, 00:25:18.229 "sock_priority": 0, 00:25:18.229 "abort_timeout_sec": 1, 00:25:18.229 "ack_timeout": 0, 00:25:18.229 "data_wr_pool_size": 0 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "nvmf_create_subsystem", 00:25:18.229 "params": { 00:25:18.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.229 "allow_any_host": false, 00:25:18.229 "serial_number": "SPDK00000000000001", 00:25:18.229 "model_number": "SPDK bdev Controller", 00:25:18.229 "max_namespaces": 10, 00:25:18.229 "min_cntlid": 1, 00:25:18.229 "max_cntlid": 65519, 00:25:18.229 "ana_reporting": false 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "nvmf_subsystem_add_host", 00:25:18.229 "params": { 00:25:18.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.229 "host": "nqn.2016-06.io.spdk:host1", 00:25:18.229 "psk": "key0" 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "nvmf_subsystem_add_ns", 00:25:18.229 "params": { 00:25:18.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.229 "namespace": { 00:25:18.229 "nsid": 1, 00:25:18.229 "bdev_name": "malloc0", 00:25:18.229 "nguid": "0D0A7746FE51434DBA0A321F01A61B00", 00:25:18.229 "uuid": "0d0a7746-fe51-434d-ba0a-321f01a61b00", 00:25:18.229 "no_auto_visible": false 00:25:18.229 } 00:25:18.229 } 00:25:18.229 }, 00:25:18.229 { 00:25:18.229 "method": "nvmf_subsystem_add_listener", 00:25:18.229 "params": { 00:25:18.229 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:18.229 "listen_address": { 00:25:18.229 "trtype": "TCP", 00:25:18.229 "adrfam": "IPv4", 00:25:18.229 "traddr": "10.0.0.2", 00:25:18.229 "trsvcid": "4420" 00:25:18.229 }, 00:25:18.229 "secure_channel": true 00:25:18.229 } 00:25:18.229 } 00:25:18.229 ] 00:25:18.229 } 00:25:18.229 ] 00:25:18.229 }' 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3455898 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3455898 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3455898 ']' 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:25:18.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:18.229 14:23:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:18.497 [2024-11-25 14:23:23.326686] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:18.497 [2024-11-25 14:23:23.326744] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.497 [2024-11-25 14:23:23.414350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.497 [2024-11-25 14:23:23.443878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.497 [2024-11-25 14:23:23.443908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.497 [2024-11-25 14:23:23.443914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.497 [2024-11-25 14:23:23.443918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.497 [2024-11-25 14:23:23.443922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:18.497 [2024-11-25 14:23:23.444396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.796 [2024-11-25 14:23:23.636624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.796 [2024-11-25 14:23:23.668652] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:18.796 [2024-11-25 14:23:23.668855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3455959 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3455959 /var/tmp/bdevperf.sock 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3455959 ']' 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:19.092 
14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:19.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:19.092 14:23:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:19.092 "subsystems": [ 00:25:19.092 { 00:25:19.092 "subsystem": "keyring", 00:25:19.092 "config": [ 00:25:19.092 { 00:25:19.092 "method": "keyring_file_add_key", 00:25:19.092 "params": { 00:25:19.092 "name": "key0", 00:25:19.092 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:19.092 } 00:25:19.092 } 00:25:19.092 ] 00:25:19.092 }, 00:25:19.092 { 00:25:19.092 "subsystem": "iobuf", 00:25:19.092 "config": [ 00:25:19.092 { 00:25:19.092 "method": "iobuf_set_options", 00:25:19.092 "params": { 00:25:19.092 "small_pool_count": 8192, 00:25:19.092 "large_pool_count": 1024, 00:25:19.092 "small_bufsize": 8192, 00:25:19.092 "large_bufsize": 135168, 00:25:19.092 "enable_numa": false 00:25:19.092 } 00:25:19.092 } 00:25:19.092 ] 00:25:19.092 }, 00:25:19.092 { 00:25:19.092 "subsystem": "sock", 00:25:19.092 "config": [ 00:25:19.092 { 00:25:19.092 "method": "sock_set_default_impl", 00:25:19.092 "params": { 00:25:19.092 "impl_name": "posix" 00:25:19.092 } 00:25:19.092 }, 00:25:19.092 { 00:25:19.092 "method": "sock_impl_set_options", 00:25:19.092 "params": { 00:25:19.092 "impl_name": "ssl", 00:25:19.092 "recv_buf_size": 4096, 00:25:19.092 "send_buf_size": 4096, 00:25:19.092 "enable_recv_pipe": true, 00:25:19.092 "enable_quickack": false, 00:25:19.092 "enable_placement_id": 0, 00:25:19.092 "enable_zerocopy_send_server": true, 00:25:19.092 "enable_zerocopy_send_client": false, 00:25:19.092 "zerocopy_threshold": 0, 00:25:19.092 "tls_version": 0, 00:25:19.092 "enable_ktls": false 00:25:19.092 } 00:25:19.092 }, 00:25:19.092 { 00:25:19.092 "method": "sock_impl_set_options", 00:25:19.092 "params": { 00:25:19.092 "impl_name": "posix", 00:25:19.092 "recv_buf_size": 2097152, 00:25:19.092 "send_buf_size": 2097152, 00:25:19.092 "enable_recv_pipe": true, 00:25:19.092 "enable_quickack": false, 00:25:19.092 "enable_placement_id": 0, 00:25:19.092 "enable_zerocopy_send_server": true, 00:25:19.092 "enable_zerocopy_send_client": false, 00:25:19.092 "zerocopy_threshold": 0, 00:25:19.092 "tls_version": 0, 00:25:19.092 "enable_ktls": false 00:25:19.092 } 00:25:19.092 } 00:25:19.092 ] 00:25:19.092 }, 00:25:19.092 { 00:25:19.092 "subsystem": "vmd", 00:25:19.092 "config": [] 00:25:19.092 }, 00:25:19.092 { 00:25:19.092 "subsystem": "accel", 00:25:19.092 "config": [ 00:25:19.092 { 00:25:19.092 "method": "accel_set_options", 00:25:19.092 "params": { 00:25:19.092 "small_cache_size": 128, 00:25:19.092 "large_cache_size": 16, 00:25:19.092 "task_count": 2048, 00:25:19.092 "sequence_count": 2048, 00:25:19.092 "buf_count": 2048 00:25:19.092 } 00:25:19.092 } 00:25:19.092 ] 00:25:19.092 }, 00:25:19.092 { 00:25:19.092 "subsystem": "bdev", 00:25:19.092 "config": [ 00:25:19.092 { 00:25:19.093 "method": "bdev_set_options", 00:25:19.093 "params": { 00:25:19.093 "bdev_io_pool_size": 65535, 00:25:19.093 "bdev_io_cache_size": 256, 00:25:19.093 "bdev_auto_examine": true, 00:25:19.093 "iobuf_small_cache_size": 128, 00:25:19.093 "iobuf_large_cache_size": 16 00:25:19.093 } 00:25:19.093 
}, 00:25:19.093 { 00:25:19.093 "method": "bdev_raid_set_options", 00:25:19.093 "params": { 00:25:19.093 "process_window_size_kb": 1024, 00:25:19.093 "process_max_bandwidth_mb_sec": 0 00:25:19.093 } 00:25:19.093 }, 00:25:19.093 { 00:25:19.093 "method": "bdev_iscsi_set_options", 00:25:19.093 "params": { 00:25:19.093 "timeout_sec": 30 00:25:19.093 } 00:25:19.093 }, 00:25:19.093 { 00:25:19.093 "method": "bdev_nvme_set_options", 00:25:19.093 "params": { 00:25:19.093 "action_on_timeout": "none", 00:25:19.093 "timeout_us": 0, 00:25:19.093 "timeout_admin_us": 0, 00:25:19.093 "keep_alive_timeout_ms": 10000, 00:25:19.093 "arbitration_burst": 0, 00:25:19.093 "low_priority_weight": 0, 00:25:19.093 "medium_priority_weight": 0, 00:25:19.093 "high_priority_weight": 0, 00:25:19.093 "nvme_adminq_poll_period_us": 10000, 00:25:19.093 "nvme_ioq_poll_period_us": 0, 00:25:19.093 "io_queue_requests": 512, 00:25:19.093 "delay_cmd_submit": true, 00:25:19.093 "transport_retry_count": 4, 00:25:19.093 "bdev_retry_count": 3, 00:25:19.093 "transport_ack_timeout": 0, 00:25:19.093 "ctrlr_loss_timeout_sec": 0, 00:25:19.093 "reconnect_delay_sec": 0, 00:25:19.093 "fast_io_fail_timeout_sec": 0, 00:25:19.093 "disable_auto_failback": false, 00:25:19.093 "generate_uuids": false, 00:25:19.093 "transport_tos": 0, 00:25:19.093 "nvme_error_stat": false, 00:25:19.093 "rdma_srq_size": 0, 00:25:19.093 "io_path_stat": false, 00:25:19.093 "allow_accel_sequence": false, 00:25:19.093 "rdma_max_cq_size": 0, 00:25:19.093 "rdma_cm_event_timeout_ms": 0, 00:25:19.093 "dhchap_digests": [ 00:25:19.093 "sha256", 00:25:19.093 "sha384", 00:25:19.093 "sha512" 00:25:19.093 ], 00:25:19.093 "dhchap_dhgroups": [ 00:25:19.093 "null", 00:25:19.093 "ffdhe2048", 00:25:19.093 "ffdhe3072", 00:25:19.093 "ffdhe4096", 00:25:19.093 "ffdhe6144", 00:25:19.093 "ffdhe8192" 00:25:19.093 ] 00:25:19.093 } 00:25:19.093 }, 00:25:19.093 { 00:25:19.093 "method": "bdev_nvme_attach_controller", 00:25:19.093 "params": { 00:25:19.093 "name": "TLSTEST", 00:25:19.093 "trtype": "TCP", 00:25:19.093 "adrfam": "IPv4", 00:25:19.093 "traddr": "10.0.0.2", 00:25:19.093 "trsvcid": "4420", 00:25:19.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:19.093 "prchk_reftag": false, 00:25:19.093 "prchk_guard": false, 00:25:19.093 "ctrlr_loss_timeout_sec": 0, 00:25:19.093 "reconnect_delay_sec": 0, 00:25:19.093 "fast_io_fail_timeout_sec": 0, 00:25:19.093 "psk": "key0", 00:25:19.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:19.093 "hdgst": false, 00:25:19.093 "ddgst": false, 00:25:19.093 "multipath": "multipath" 00:25:19.093 } 00:25:19.093 }, 00:25:19.093 { 00:25:19.093 "method": "bdev_nvme_set_hotplug", 00:25:19.093 "params": { 00:25:19.093 "period_us": 100000, 00:25:19.093 "enable": false 00:25:19.093 } 00:25:19.093 }, 00:25:19.093 { 00:25:19.093 "method": "bdev_wait_for_examine" 00:25:19.093 } 00:25:19.093 ] 00:25:19.093 }, 00:25:19.093 { 00:25:19.093 "subsystem": "nbd", 00:25:19.093 "config": [] 00:25:19.093 } 00:25:19.093 ] 00:25:19.093 }' 00:25:19.419 [2024-11-25 14:23:24.203694] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:25:19.419 [2024-11-25 14:23:24.203747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455959 ] 00:25:19.419 [2024-11-25 14:23:24.290947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.419 [2024-11-25 14:23:24.326349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:19.419 [2024-11-25 14:23:24.465772] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:19.989 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:19.989 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:19.989 14:23:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:20.248 Running I/O for 10 seconds... 00:25:22.128 5279.00 IOPS, 20.62 MiB/s [2024-11-25T13:23:28.159Z] 4884.50 IOPS, 19.08 MiB/s [2024-11-25T13:23:29.545Z] 5298.67 IOPS, 20.70 MiB/s [2024-11-25T13:23:30.117Z] 5233.75 IOPS, 20.44 MiB/s [2024-11-25T13:23:31.503Z] 5032.00 IOPS, 19.66 MiB/s [2024-11-25T13:23:32.445Z] 5145.00 IOPS, 20.10 MiB/s [2024-11-25T13:23:33.387Z] 5273.00 IOPS, 20.60 MiB/s [2024-11-25T13:23:34.329Z] 5258.25 IOPS, 20.54 MiB/s [2024-11-25T13:23:35.270Z] 5309.44 IOPS, 20.74 MiB/s [2024-11-25T13:23:35.270Z] 5320.70 IOPS, 20.78 MiB/s 00:25:30.180 Latency(us) 00:25:30.180 [2024-11-25T13:23:35.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.180 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:30.180 Verification LBA range: start 0x0 length 0x2000 00:25:30.180 TLSTESTn1 : 10.01 5327.19 20.81 0.00 0.00 23994.12 4915.20 37792.43 00:25:30.180 [2024-11-25T13:23:35.270Z] =================================================================================================================== 00:25:30.180 [2024-11-25T13:23:35.270Z] Total : 5327.19 20.81 0.00 0.00 23994.12 4915.20 37792.43 00:25:30.180 { 00:25:30.180 "results": [ 00:25:30.180 { 00:25:30.180 "job": "TLSTESTn1", 00:25:30.180 "core_mask": "0x4", 00:25:30.180 "workload": "verify", 00:25:30.180 "status": "finished", 00:25:30.180 "verify_range": { 00:25:30.180 "start": 0, 00:25:30.180 "length": 8192 00:25:30.180 }, 00:25:30.180 "queue_depth": 128, 00:25:30.180 "io_size": 4096, 00:25:30.180 "runtime": 10.011659, 00:25:30.180 "iops": 5327.189030309562, 00:25:30.180 "mibps": 20.80933214964673, 00:25:30.180 "io_failed": 0, 00:25:30.180 "io_timeout": 0, 00:25:30.180 "avg_latency_us": 23994.12413744828, 00:25:30.180 "min_latency_us": 4915.2, 00:25:30.180 "max_latency_us": 37792.426666666666 00:25:30.180 } 00:25:30.180 ], 00:25:30.180 "core_count": 1 00:25:30.180 } 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3455959 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3455959 ']' 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3455959 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455959 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455959' 00:25:30.180 killing process with pid 3455959 00:25:30.180 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3455959 00:25:30.180 Received shutdown signal, test time was about 10.000000 seconds 00:25:30.180 00:25:30.180 Latency(us) 00:25:30.180 [2024-11-25T13:23:35.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.180 [2024-11-25T13:23:35.270Z] =================================================================================================================== 00:25:30.180 [2024-11-25T13:23:35.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.181 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3455959 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3455898 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3455898 ']' 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3455898 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455898 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455898' 00:25:30.442 killing process with pid 3455898 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3455898 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3455898 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3458281 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3458281 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3458281 ']' 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.442 14:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.716 [2024-11-25 14:23:35.555545] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:30.716 [2024-11-25 14:23:35.555603] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.716 [2024-11-25 14:23:35.648688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.716 [2024-11-25 14:23:35.689118] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.716 [2024-11-25 14:23:35.689173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.716 [2024-11-25 14:23:35.689182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.716 [2024-11-25 14:23:35.689189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.716 [2024-11-25 14:23:35.689194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:30.716 [2024-11-25 14:23:35.689860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.288 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.288 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:31.288 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.288 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.288 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.550 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.550 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.92hhUuDJ2e 00:25:31.550 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.92hhUuDJ2e 00:25:31.550 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:31.550 [2024-11-25 14:23:36.560671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.550 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:31.811 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:32.072 [2024-11-25 14:23:36.913562] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:32.072 [2024-11-25 14:23:36.913904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.072 14:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:32.072 malloc0 00:25:32.072 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:32.333 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3458648 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3458648 /var/tmp/bdevperf.sock 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3458648 ']' 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.595 14:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.855 [2024-11-25 14:23:37.691475] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:32.855 [2024-11-25 14:23:37.691552] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458648 ] 00:25:32.855 [2024-11-25 14:23:37.778345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.855 [2024-11-25 14:23:37.812696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.469 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.469 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:33.469 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:25:33.729 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:33.729 [2024-11-25 14:23:38.810392] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:33.989 nvme0n1 00:25:33.989 14:23:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:33.989 Running I/O for 1 seconds... 
00:25:34.933 5766.00 IOPS, 22.52 MiB/s 00:25:34.933 Latency(us) 00:25:34.933 [2024-11-25T13:23:40.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.933 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:34.933 Verification LBA range: start 0x0 length 0x2000 00:25:34.933 nvme0n1 : 1.02 5801.43 22.66 0.00 0.00 21887.41 5079.04 31894.19 00:25:34.933 [2024-11-25T13:23:40.023Z] =================================================================================================================== 00:25:34.933 [2024-11-25T13:23:40.023Z] Total : 5801.43 22.66 0.00 0.00 21887.41 5079.04 31894.19 00:25:34.933 { 00:25:34.933 "results": [ 00:25:34.933 { 00:25:34.933 "job": "nvme0n1", 00:25:34.933 "core_mask": "0x2", 00:25:34.933 "workload": "verify", 00:25:34.933 "status": "finished", 00:25:34.933 "verify_range": { 00:25:34.933 "start": 0, 00:25:34.933 "length": 8192 00:25:34.933 }, 00:25:34.933 "queue_depth": 128, 00:25:34.933 "io_size": 4096, 00:25:34.933 "runtime": 1.015957, 00:25:34.933 "iops": 5801.426635182394, 00:25:34.933 "mibps": 22.661822793681228, 00:25:34.933 "io_failed": 0, 00:25:34.933 "io_timeout": 0, 00:25:34.933 "avg_latency_us": 21887.40524375071, 00:25:34.933 "min_latency_us": 5079.04, 00:25:34.933 "max_latency_us": 31894.18666666667 00:25:34.933 } 00:25:34.933 ], 00:25:34.933 "core_count": 1 00:25:34.933 } 00:25:35.194 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3458648 00:25:35.194 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3458648 ']' 00:25:35.194 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3458648 00:25:35.194 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3458648 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3458648' 00:25:35.195 killing process with pid 3458648 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3458648 00:25:35.195 Received shutdown signal, test time was about 1.000000 seconds 00:25:35.195 00:25:35.195 Latency(us) 00:25:35.195 [2024-11-25T13:23:40.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.195 [2024-11-25T13:23:40.285Z] =================================================================================================================== 00:25:35.195 [2024-11-25T13:23:40.285Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3458648 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3458281 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3458281 ']' 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3458281 00:25:35.195 14:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3458281 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3458281' 00:25:35.195 killing process with pid 3458281 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3458281 00:25:35.195 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3458281 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3459271 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3459271 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3459271 ']' 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.456 14:23:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.456 [2024-11-25 14:23:40.477174] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:35.456 [2024-11-25 14:23:40.477240] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.718 [2024-11-25 14:23:40.572980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.718 [2024-11-25 14:23:40.622740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.718 [2024-11-25 14:23:40.622791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
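The restarted target (pid 3459271) was launched with -e 0xFFFF, so every tracepoint group is enabled; the notices here spell out how to get at the data. A sketch following those hints, assuming an SPDK build tree where the tracer binary lives under build/bin:

    build/bin/spdk_trace -s nvmf -i 0      # decode a snapshot of the live trace ring for app instance 0
    cp /dev/shm/nvmf_trace.0 /tmp/         # or keep the raw shm file for offline decoding

The cleanup phase at the end of this test archives exactly that shm file (nvmf_trace.0_shm.tar.gz).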
00:25:35.718 [2024-11-25 14:23:40.622799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:35.718 [2024-11-25 14:23:40.622806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:35.718 [2024-11-25 14:23:40.622812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:35.718 [2024-11-25 14:23:40.623612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.289 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.289 [2024-11-25 14:23:41.344796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.289 malloc0 00:25:36.289 [2024-11-25 14:23:41.374805] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:36.289 [2024-11-25 14:23:41.375125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3459360 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3459360 /var/tmp/bdevperf.sock 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3459360 ']' 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:36.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.550 14:23:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.550 [2024-11-25 14:23:41.465854] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:25:36.550 [2024-11-25 14:23:41.465934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459360 ] 00:25:36.550 [2024-11-25 14:23:41.553038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.550 [2024-11-25 14:23:41.587734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.491 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.492 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:37.492 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.92hhUuDJ2e 00:25:37.492 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:37.752 [2024-11-25 14:23:42.609886] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:37.752 nvme0n1 00:25:37.752 14:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:37.752 Running I/O for 1 seconds... 00:25:39.136 5991.00 IOPS, 23.40 MiB/s 00:25:39.136 Latency(us) 00:25:39.136 [2024-11-25T13:23:44.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.137 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:39.137 Verification LBA range: start 0x0 length 0x2000 00:25:39.137 nvme0n1 : 1.01 6029.72 23.55 0.00 0.00 21076.52 6116.69 21408.43 00:25:39.137 [2024-11-25T13:23:44.227Z] =================================================================================================================== 00:25:39.137 [2024-11-25T13:23:44.227Z] Total : 6029.72 23.55 0.00 0.00 21076.52 6116.69 21408.43 00:25:39.137 { 00:25:39.137 "results": [ 00:25:39.137 { 00:25:39.137 "job": "nvme0n1", 00:25:39.137 "core_mask": "0x2", 00:25:39.137 "workload": "verify", 00:25:39.137 "status": "finished", 00:25:39.137 "verify_range": { 00:25:39.137 "start": 0, 00:25:39.137 "length": 8192 00:25:39.137 }, 00:25:39.137 "queue_depth": 128, 00:25:39.137 "io_size": 4096, 00:25:39.137 "runtime": 1.014806, 00:25:39.137 "iops": 6029.723907820805, 00:25:39.137 "mibps": 23.55360901492502, 00:25:39.137 "io_failed": 0, 00:25:39.137 "io_timeout": 0, 00:25:39.137 "avg_latency_us": 21076.522813095824, 00:25:39.137 "min_latency_us": 6116.693333333334, 00:25:39.137 "max_latency_us": 21408.426666666666 00:25:39.137 } 00:25:39.137 ], 00:25:39.137 "core_count": 1 00:25:39.137 } 00:25:39.137 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:39.137 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.137 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:39.137 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.137 14:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:39.137 "subsystems": [ 00:25:39.137 { 00:25:39.137 "subsystem": "keyring", 00:25:39.137 "config": [ 00:25:39.137 { 00:25:39.137 "method": "keyring_file_add_key", 00:25:39.137 "params": { 00:25:39.137 "name": "key0", 00:25:39.137 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:39.137 } 00:25:39.137 } 00:25:39.137 ] 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "subsystem": "iobuf", 00:25:39.137 "config": [ 00:25:39.137 { 00:25:39.137 "method": "iobuf_set_options", 00:25:39.137 "params": { 00:25:39.137 "small_pool_count": 8192, 00:25:39.137 "large_pool_count": 1024, 00:25:39.137 "small_bufsize": 8192, 00:25:39.137 "large_bufsize": 135168, 00:25:39.137 "enable_numa": false 00:25:39.137 } 00:25:39.137 } 00:25:39.137 ] 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "subsystem": "sock", 00:25:39.137 "config": [ 00:25:39.137 { 00:25:39.137 "method": "sock_set_default_impl", 00:25:39.137 "params": { 00:25:39.137 "impl_name": "posix" 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "sock_impl_set_options", 00:25:39.137 "params": { 00:25:39.137 "impl_name": "ssl", 00:25:39.137 "recv_buf_size": 4096, 00:25:39.137 "send_buf_size": 4096, 00:25:39.137 "enable_recv_pipe": true, 00:25:39.137 "enable_quickack": false, 00:25:39.137 "enable_placement_id": 0, 00:25:39.137 "enable_zerocopy_send_server": true, 00:25:39.137 "enable_zerocopy_send_client": false, 00:25:39.137 "zerocopy_threshold": 0, 00:25:39.137 "tls_version": 0, 00:25:39.137 "enable_ktls": false 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "sock_impl_set_options", 00:25:39.137 "params": { 00:25:39.137 "impl_name": "posix", 00:25:39.137 "recv_buf_size": 2097152, 00:25:39.137 "send_buf_size": 2097152, 00:25:39.137 "enable_recv_pipe": true, 00:25:39.137 "enable_quickack": false, 00:25:39.137 "enable_placement_id": 0, 00:25:39.137 "enable_zerocopy_send_server": true, 00:25:39.137 "enable_zerocopy_send_client": false, 00:25:39.137 "zerocopy_threshold": 0, 00:25:39.137 "tls_version": 0, 00:25:39.137 "enable_ktls": false 00:25:39.137 } 00:25:39.137 } 00:25:39.137 ] 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "subsystem": "vmd", 00:25:39.137 "config": [] 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "subsystem": "accel", 00:25:39.137 "config": [ 00:25:39.137 { 00:25:39.137 "method": "accel_set_options", 00:25:39.137 "params": { 00:25:39.137 "small_cache_size": 128, 00:25:39.137 "large_cache_size": 16, 00:25:39.137 "task_count": 2048, 00:25:39.137 "sequence_count": 2048, 00:25:39.137 "buf_count": 2048 00:25:39.137 } 00:25:39.137 } 00:25:39.137 ] 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "subsystem": "bdev", 00:25:39.137 "config": [ 00:25:39.137 { 00:25:39.137 "method": "bdev_set_options", 00:25:39.137 "params": { 00:25:39.137 "bdev_io_pool_size": 65535, 00:25:39.137 "bdev_io_cache_size": 256, 00:25:39.137 "bdev_auto_examine": true, 00:25:39.137 "iobuf_small_cache_size": 128, 00:25:39.137 "iobuf_large_cache_size": 16 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "bdev_raid_set_options", 00:25:39.137 "params": { 00:25:39.137 "process_window_size_kb": 1024, 00:25:39.137 "process_max_bandwidth_mb_sec": 0 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "bdev_iscsi_set_options", 00:25:39.137 "params": { 00:25:39.137 "timeout_sec": 30 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "bdev_nvme_set_options", 00:25:39.137 "params": { 00:25:39.137 "action_on_timeout": "none", 00:25:39.137 
"timeout_us": 0, 00:25:39.137 "timeout_admin_us": 0, 00:25:39.137 "keep_alive_timeout_ms": 10000, 00:25:39.137 "arbitration_burst": 0, 00:25:39.137 "low_priority_weight": 0, 00:25:39.137 "medium_priority_weight": 0, 00:25:39.137 "high_priority_weight": 0, 00:25:39.137 "nvme_adminq_poll_period_us": 10000, 00:25:39.137 "nvme_ioq_poll_period_us": 0, 00:25:39.137 "io_queue_requests": 0, 00:25:39.137 "delay_cmd_submit": true, 00:25:39.137 "transport_retry_count": 4, 00:25:39.137 "bdev_retry_count": 3, 00:25:39.137 "transport_ack_timeout": 0, 00:25:39.137 "ctrlr_loss_timeout_sec": 0, 00:25:39.137 "reconnect_delay_sec": 0, 00:25:39.137 "fast_io_fail_timeout_sec": 0, 00:25:39.137 "disable_auto_failback": false, 00:25:39.137 "generate_uuids": false, 00:25:39.137 "transport_tos": 0, 00:25:39.137 "nvme_error_stat": false, 00:25:39.137 "rdma_srq_size": 0, 00:25:39.137 "io_path_stat": false, 00:25:39.137 "allow_accel_sequence": false, 00:25:39.137 "rdma_max_cq_size": 0, 00:25:39.137 "rdma_cm_event_timeout_ms": 0, 00:25:39.137 "dhchap_digests": [ 00:25:39.137 "sha256", 00:25:39.137 "sha384", 00:25:39.137 "sha512" 00:25:39.137 ], 00:25:39.137 "dhchap_dhgroups": [ 00:25:39.137 "null", 00:25:39.137 "ffdhe2048", 00:25:39.137 "ffdhe3072", 00:25:39.137 "ffdhe4096", 00:25:39.137 "ffdhe6144", 00:25:39.137 "ffdhe8192" 00:25:39.137 ] 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "bdev_nvme_set_hotplug", 00:25:39.137 "params": { 00:25:39.137 "period_us": 100000, 00:25:39.137 "enable": false 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "bdev_malloc_create", 00:25:39.137 "params": { 00:25:39.137 "name": "malloc0", 00:25:39.137 "num_blocks": 8192, 00:25:39.137 "block_size": 4096, 00:25:39.137 "physical_block_size": 4096, 00:25:39.137 "uuid": "798bf158-c7b1-4601-989f-37cafb26ad15", 00:25:39.137 "optimal_io_boundary": 0, 00:25:39.137 "md_size": 0, 00:25:39.137 "dif_type": 0, 00:25:39.137 "dif_is_head_of_md": false, 00:25:39.137 "dif_pi_format": 0 00:25:39.137 } 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "method": "bdev_wait_for_examine" 00:25:39.137 } 00:25:39.137 ] 00:25:39.137 }, 00:25:39.137 { 00:25:39.137 "subsystem": "nbd", 00:25:39.137 "config": [] 00:25:39.137 }, 00:25:39.137 { 00:25:39.138 "subsystem": "scheduler", 00:25:39.138 "config": [ 00:25:39.138 { 00:25:39.138 "method": "framework_set_scheduler", 00:25:39.138 "params": { 00:25:39.138 "name": "static" 00:25:39.138 } 00:25:39.138 } 00:25:39.138 ] 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "subsystem": "nvmf", 00:25:39.138 "config": [ 00:25:39.138 { 00:25:39.138 "method": "nvmf_set_config", 00:25:39.138 "params": { 00:25:39.138 "discovery_filter": "match_any", 00:25:39.138 "admin_cmd_passthru": { 00:25:39.138 "identify_ctrlr": false 00:25:39.138 }, 00:25:39.138 "dhchap_digests": [ 00:25:39.138 "sha256", 00:25:39.138 "sha384", 00:25:39.138 "sha512" 00:25:39.138 ], 00:25:39.138 "dhchap_dhgroups": [ 00:25:39.138 "null", 00:25:39.138 "ffdhe2048", 00:25:39.138 "ffdhe3072", 00:25:39.138 "ffdhe4096", 00:25:39.138 "ffdhe6144", 00:25:39.138 "ffdhe8192" 00:25:39.138 ] 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "nvmf_set_max_subsystems", 00:25:39.138 "params": { 00:25:39.138 "max_subsystems": 1024 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "nvmf_set_crdt", 00:25:39.138 "params": { 00:25:39.138 "crdt1": 0, 00:25:39.138 "crdt2": 0, 00:25:39.138 "crdt3": 0 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "nvmf_create_transport", 00:25:39.138 "params": 
{ 00:25:39.138 "trtype": "TCP", 00:25:39.138 "max_queue_depth": 128, 00:25:39.138 "max_io_qpairs_per_ctrlr": 127, 00:25:39.138 "in_capsule_data_size": 4096, 00:25:39.138 "max_io_size": 131072, 00:25:39.138 "io_unit_size": 131072, 00:25:39.138 "max_aq_depth": 128, 00:25:39.138 "num_shared_buffers": 511, 00:25:39.138 "buf_cache_size": 4294967295, 00:25:39.138 "dif_insert_or_strip": false, 00:25:39.138 "zcopy": false, 00:25:39.138 "c2h_success": false, 00:25:39.138 "sock_priority": 0, 00:25:39.138 "abort_timeout_sec": 1, 00:25:39.138 "ack_timeout": 0, 00:25:39.138 "data_wr_pool_size": 0 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "nvmf_create_subsystem", 00:25:39.138 "params": { 00:25:39.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.138 "allow_any_host": false, 00:25:39.138 "serial_number": "00000000000000000000", 00:25:39.138 "model_number": "SPDK bdev Controller", 00:25:39.138 "max_namespaces": 32, 00:25:39.138 "min_cntlid": 1, 00:25:39.138 "max_cntlid": 65519, 00:25:39.138 "ana_reporting": false 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "nvmf_subsystem_add_host", 00:25:39.138 "params": { 00:25:39.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.138 "host": "nqn.2016-06.io.spdk:host1", 00:25:39.138 "psk": "key0" 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "nvmf_subsystem_add_ns", 00:25:39.138 "params": { 00:25:39.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.138 "namespace": { 00:25:39.138 "nsid": 1, 00:25:39.138 "bdev_name": "malloc0", 00:25:39.138 "nguid": "798BF158C7B14601989F37CAFB26AD15", 00:25:39.138 "uuid": "798bf158-c7b1-4601-989f-37cafb26ad15", 00:25:39.138 "no_auto_visible": false 00:25:39.138 } 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "nvmf_subsystem_add_listener", 00:25:39.138 "params": { 00:25:39.138 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.138 "listen_address": { 00:25:39.138 "trtype": "TCP", 00:25:39.138 "adrfam": "IPv4", 00:25:39.138 "traddr": "10.0.0.2", 00:25:39.138 "trsvcid": "4420" 00:25:39.138 }, 00:25:39.138 "secure_channel": false, 00:25:39.138 "sock_impl": "ssl" 00:25:39.138 } 00:25:39.138 } 00:25:39.138 ] 00:25:39.138 } 00:25:39.138 ] 00:25:39.138 }' 00:25:39.138 14:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:39.138 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:39.138 "subsystems": [ 00:25:39.138 { 00:25:39.138 "subsystem": "keyring", 00:25:39.138 "config": [ 00:25:39.138 { 00:25:39.138 "method": "keyring_file_add_key", 00:25:39.138 "params": { 00:25:39.138 "name": "key0", 00:25:39.138 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:39.138 } 00:25:39.138 } 00:25:39.138 ] 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "subsystem": "iobuf", 00:25:39.138 "config": [ 00:25:39.138 { 00:25:39.138 "method": "iobuf_set_options", 00:25:39.138 "params": { 00:25:39.138 "small_pool_count": 8192, 00:25:39.138 "large_pool_count": 1024, 00:25:39.138 "small_bufsize": 8192, 00:25:39.138 "large_bufsize": 135168, 00:25:39.138 "enable_numa": false 00:25:39.138 } 00:25:39.138 } 00:25:39.138 ] 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "subsystem": "sock", 00:25:39.138 "config": [ 00:25:39.138 { 00:25:39.138 "method": "sock_set_default_impl", 00:25:39.138 "params": { 00:25:39.138 "impl_name": "posix" 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "sock_impl_set_options", 00:25:39.138 
"params": { 00:25:39.138 "impl_name": "ssl", 00:25:39.138 "recv_buf_size": 4096, 00:25:39.138 "send_buf_size": 4096, 00:25:39.138 "enable_recv_pipe": true, 00:25:39.138 "enable_quickack": false, 00:25:39.138 "enable_placement_id": 0, 00:25:39.138 "enable_zerocopy_send_server": true, 00:25:39.138 "enable_zerocopy_send_client": false, 00:25:39.138 "zerocopy_threshold": 0, 00:25:39.138 "tls_version": 0, 00:25:39.138 "enable_ktls": false 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "sock_impl_set_options", 00:25:39.138 "params": { 00:25:39.138 "impl_name": "posix", 00:25:39.138 "recv_buf_size": 2097152, 00:25:39.138 "send_buf_size": 2097152, 00:25:39.138 "enable_recv_pipe": true, 00:25:39.138 "enable_quickack": false, 00:25:39.138 "enable_placement_id": 0, 00:25:39.138 "enable_zerocopy_send_server": true, 00:25:39.138 "enable_zerocopy_send_client": false, 00:25:39.138 "zerocopy_threshold": 0, 00:25:39.138 "tls_version": 0, 00:25:39.138 "enable_ktls": false 00:25:39.138 } 00:25:39.138 } 00:25:39.138 ] 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "subsystem": "vmd", 00:25:39.138 "config": [] 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "subsystem": "accel", 00:25:39.138 "config": [ 00:25:39.138 { 00:25:39.138 "method": "accel_set_options", 00:25:39.138 "params": { 00:25:39.138 "small_cache_size": 128, 00:25:39.138 "large_cache_size": 16, 00:25:39.138 "task_count": 2048, 00:25:39.138 "sequence_count": 2048, 00:25:39.138 "buf_count": 2048 00:25:39.138 } 00:25:39.138 } 00:25:39.138 ] 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "subsystem": "bdev", 00:25:39.138 "config": [ 00:25:39.138 { 00:25:39.138 "method": "bdev_set_options", 00:25:39.138 "params": { 00:25:39.138 "bdev_io_pool_size": 65535, 00:25:39.138 "bdev_io_cache_size": 256, 00:25:39.138 "bdev_auto_examine": true, 00:25:39.138 "iobuf_small_cache_size": 128, 00:25:39.138 "iobuf_large_cache_size": 16 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "bdev_raid_set_options", 00:25:39.138 "params": { 00:25:39.138 "process_window_size_kb": 1024, 00:25:39.138 "process_max_bandwidth_mb_sec": 0 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "bdev_iscsi_set_options", 00:25:39.138 "params": { 00:25:39.138 "timeout_sec": 30 00:25:39.138 } 00:25:39.138 }, 00:25:39.138 { 00:25:39.138 "method": "bdev_nvme_set_options", 00:25:39.138 "params": { 00:25:39.138 "action_on_timeout": "none", 00:25:39.138 "timeout_us": 0, 00:25:39.138 "timeout_admin_us": 0, 00:25:39.138 "keep_alive_timeout_ms": 10000, 00:25:39.138 "arbitration_burst": 0, 00:25:39.138 "low_priority_weight": 0, 00:25:39.138 "medium_priority_weight": 0, 00:25:39.138 "high_priority_weight": 0, 00:25:39.138 "nvme_adminq_poll_period_us": 10000, 00:25:39.138 "nvme_ioq_poll_period_us": 0, 00:25:39.138 "io_queue_requests": 512, 00:25:39.138 "delay_cmd_submit": true, 00:25:39.138 "transport_retry_count": 4, 00:25:39.138 "bdev_retry_count": 3, 00:25:39.138 "transport_ack_timeout": 0, 00:25:39.138 "ctrlr_loss_timeout_sec": 0, 00:25:39.138 "reconnect_delay_sec": 0, 00:25:39.139 "fast_io_fail_timeout_sec": 0, 00:25:39.139 "disable_auto_failback": false, 00:25:39.139 "generate_uuids": false, 00:25:39.139 "transport_tos": 0, 00:25:39.139 "nvme_error_stat": false, 00:25:39.139 "rdma_srq_size": 0, 00:25:39.139 "io_path_stat": false, 00:25:39.139 "allow_accel_sequence": false, 00:25:39.139 "rdma_max_cq_size": 0, 00:25:39.139 "rdma_cm_event_timeout_ms": 0, 00:25:39.139 "dhchap_digests": [ 00:25:39.139 "sha256", 00:25:39.139 "sha384", 00:25:39.139 
"sha512" 00:25:39.139 ], 00:25:39.139 "dhchap_dhgroups": [ 00:25:39.139 "null", 00:25:39.139 "ffdhe2048", 00:25:39.139 "ffdhe3072", 00:25:39.139 "ffdhe4096", 00:25:39.139 "ffdhe6144", 00:25:39.139 "ffdhe8192" 00:25:39.139 ] 00:25:39.139 } 00:25:39.139 }, 00:25:39.139 { 00:25:39.139 "method": "bdev_nvme_attach_controller", 00:25:39.139 "params": { 00:25:39.139 "name": "nvme0", 00:25:39.139 "trtype": "TCP", 00:25:39.139 "adrfam": "IPv4", 00:25:39.139 "traddr": "10.0.0.2", 00:25:39.139 "trsvcid": "4420", 00:25:39.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.139 "prchk_reftag": false, 00:25:39.139 "prchk_guard": false, 00:25:39.139 "ctrlr_loss_timeout_sec": 0, 00:25:39.139 "reconnect_delay_sec": 0, 00:25:39.139 "fast_io_fail_timeout_sec": 0, 00:25:39.139 "psk": "key0", 00:25:39.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:39.139 "hdgst": false, 00:25:39.139 "ddgst": false, 00:25:39.139 "multipath": "multipath" 00:25:39.139 } 00:25:39.139 }, 00:25:39.139 { 00:25:39.139 "method": "bdev_nvme_set_hotplug", 00:25:39.139 "params": { 00:25:39.139 "period_us": 100000, 00:25:39.139 "enable": false 00:25:39.139 } 00:25:39.139 }, 00:25:39.139 { 00:25:39.139 "method": "bdev_enable_histogram", 00:25:39.139 "params": { 00:25:39.139 "name": "nvme0n1", 00:25:39.139 "enable": true 00:25:39.139 } 00:25:39.139 }, 00:25:39.139 { 00:25:39.139 "method": "bdev_wait_for_examine" 00:25:39.139 } 00:25:39.139 ] 00:25:39.139 }, 00:25:39.139 { 00:25:39.139 "subsystem": "nbd", 00:25:39.139 "config": [] 00:25:39.139 } 00:25:39.139 ] 00:25:39.139 }' 00:25:39.139 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3459360 00:25:39.139 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3459360 ']' 00:25:39.139 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3459360 00:25:39.139 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:39.139 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.139 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459360 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459360' 00:25:39.401 killing process with pid 3459360 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3459360 00:25:39.401 Received shutdown signal, test time was about 1.000000 seconds 00:25:39.401 00:25:39.401 Latency(us) 00:25:39.401 [2024-11-25T13:23:44.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.401 [2024-11-25T13:23:44.491Z] =================================================================================================================== 00:25:39.401 [2024-11-25T13:23:44.491Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3459360 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3459271 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3459271 
']' 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3459271 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3459271 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3459271' 00:25:39.401 killing process with pid 3459271 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3459271 00:25:39.401 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3459271 00:25:39.663 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:39.663 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:39.663 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:39.663 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:39.663 "subsystems": [ 00:25:39.663 { 00:25:39.663 "subsystem": "keyring", 00:25:39.663 "config": [ 00:25:39.663 { 00:25:39.663 "method": "keyring_file_add_key", 00:25:39.663 "params": { 00:25:39.663 "name": "key0", 00:25:39.663 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:39.663 } 00:25:39.663 } 00:25:39.663 ] 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "subsystem": "iobuf", 00:25:39.663 "config": [ 00:25:39.663 { 00:25:39.663 "method": "iobuf_set_options", 00:25:39.663 "params": { 00:25:39.663 "small_pool_count": 8192, 00:25:39.663 "large_pool_count": 1024, 00:25:39.663 "small_bufsize": 8192, 00:25:39.663 "large_bufsize": 135168, 00:25:39.663 "enable_numa": false 00:25:39.663 } 00:25:39.663 } 00:25:39.663 ] 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "subsystem": "sock", 00:25:39.663 "config": [ 00:25:39.663 { 00:25:39.663 "method": "sock_set_default_impl", 00:25:39.663 "params": { 00:25:39.663 "impl_name": "posix" 00:25:39.663 } 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "method": "sock_impl_set_options", 00:25:39.663 "params": { 00:25:39.663 "impl_name": "ssl", 00:25:39.663 "recv_buf_size": 4096, 00:25:39.663 "send_buf_size": 4096, 00:25:39.663 "enable_recv_pipe": true, 00:25:39.663 "enable_quickack": false, 00:25:39.663 "enable_placement_id": 0, 00:25:39.663 "enable_zerocopy_send_server": true, 00:25:39.663 "enable_zerocopy_send_client": false, 00:25:39.663 "zerocopy_threshold": 0, 00:25:39.663 "tls_version": 0, 00:25:39.663 "enable_ktls": false 00:25:39.663 } 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "method": "sock_impl_set_options", 00:25:39.663 "params": { 00:25:39.663 "impl_name": "posix", 00:25:39.663 "recv_buf_size": 2097152, 00:25:39.663 "send_buf_size": 2097152, 00:25:39.663 "enable_recv_pipe": true, 00:25:39.663 "enable_quickack": false, 00:25:39.663 "enable_placement_id": 0, 00:25:39.663 "enable_zerocopy_send_server": true, 00:25:39.663 "enable_zerocopy_send_client": false, 00:25:39.663 "zerocopy_threshold": 0, 00:25:39.663 "tls_version": 0, 00:25:39.663 "enable_ktls": 
false 00:25:39.663 } 00:25:39.663 } 00:25:39.663 ] 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "subsystem": "vmd", 00:25:39.663 "config": [] 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "subsystem": "accel", 00:25:39.663 "config": [ 00:25:39.663 { 00:25:39.663 "method": "accel_set_options", 00:25:39.663 "params": { 00:25:39.663 "small_cache_size": 128, 00:25:39.663 "large_cache_size": 16, 00:25:39.663 "task_count": 2048, 00:25:39.663 "sequence_count": 2048, 00:25:39.663 "buf_count": 2048 00:25:39.663 } 00:25:39.663 } 00:25:39.663 ] 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "subsystem": "bdev", 00:25:39.663 "config": [ 00:25:39.663 { 00:25:39.663 "method": "bdev_set_options", 00:25:39.663 "params": { 00:25:39.663 "bdev_io_pool_size": 65535, 00:25:39.663 "bdev_io_cache_size": 256, 00:25:39.663 "bdev_auto_examine": true, 00:25:39.663 "iobuf_small_cache_size": 128, 00:25:39.663 "iobuf_large_cache_size": 16 00:25:39.663 } 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "method": "bdev_raid_set_options", 00:25:39.663 "params": { 00:25:39.663 "process_window_size_kb": 1024, 00:25:39.663 "process_max_bandwidth_mb_sec": 0 00:25:39.663 } 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "method": "bdev_iscsi_set_options", 00:25:39.663 "params": { 00:25:39.663 "timeout_sec": 30 00:25:39.663 } 00:25:39.663 }, 00:25:39.663 { 00:25:39.663 "method": "bdev_nvme_set_options", 00:25:39.663 "params": { 00:25:39.663 "action_on_timeout": "none", 00:25:39.663 "timeout_us": 0, 00:25:39.663 "timeout_admin_us": 0, 00:25:39.663 "keep_alive_timeout_ms": 10000, 00:25:39.663 "arbitration_burst": 0, 00:25:39.663 "low_priority_weight": 0, 00:25:39.663 "medium_priority_weight": 0, 00:25:39.663 "high_priority_weight": 0, 00:25:39.663 "nvme_adminq_poll_period_us": 10000, 00:25:39.663 "nvme_ioq_poll_period_us": 0, 00:25:39.663 "io_queue_requests": 0, 00:25:39.663 "delay_cmd_submit": true, 00:25:39.663 "transport_retry_count": 4, 00:25:39.663 "bdev_retry_count": 3, 00:25:39.663 "transport_ack_timeout": 0, 00:25:39.663 "ctrlr_loss_timeout_sec": 0, 00:25:39.663 "reconnect_delay_sec": 0, 00:25:39.663 "fast_io_fail_timeout_sec": 0, 00:25:39.663 "disable_auto_failback": false, 00:25:39.663 "generate_uuids": false, 00:25:39.663 "transport_tos": 0, 00:25:39.663 "nvme_error_stat": false, 00:25:39.663 "rdma_srq_size": 0, 00:25:39.663 "io_path_stat": false, 00:25:39.663 "allow_accel_sequence": false, 00:25:39.664 "rdma_max_cq_size": 0, 00:25:39.664 "rdma_cm_event_timeout_ms": 0, 00:25:39.664 "dhchap_digests": [ 00:25:39.664 "sha256", 00:25:39.664 "sha384", 00:25:39.664 "sha512" 00:25:39.664 ], 00:25:39.664 "dhchap_dhgroups": [ 00:25:39.664 "null", 00:25:39.664 "ffdhe2048", 00:25:39.664 "ffdhe3072", 00:25:39.664 "ffdhe4096", 00:25:39.664 "ffdhe6144", 00:25:39.664 "ffdhe8192" 00:25:39.664 ] 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "bdev_nvme_set_hotplug", 00:25:39.664 "params": { 00:25:39.664 "period_us": 100000, 00:25:39.664 "enable": false 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "bdev_malloc_create", 00:25:39.664 "params": { 00:25:39.664 "name": "malloc0", 00:25:39.664 "num_blocks": 8192, 00:25:39.664 "block_size": 4096, 00:25:39.664 "physical_block_size": 4096, 00:25:39.664 "uuid": "798bf158-c7b1-4601-989f-37cafb26ad15", 00:25:39.664 "optimal_io_boundary": 0, 00:25:39.664 "md_size": 0, 00:25:39.664 "dif_type": 0, 00:25:39.664 "dif_is_head_of_md": false, 00:25:39.664 "dif_pi_format": 0 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "bdev_wait_for_examine" 
00:25:39.664 } 00:25:39.664 ] 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "subsystem": "nbd", 00:25:39.664 "config": [] 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "subsystem": "scheduler", 00:25:39.664 "config": [ 00:25:39.664 { 00:25:39.664 "method": "framework_set_scheduler", 00:25:39.664 "params": { 00:25:39.664 "name": "static" 00:25:39.664 } 00:25:39.664 } 00:25:39.664 ] 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "subsystem": "nvmf", 00:25:39.664 "config": [ 00:25:39.664 { 00:25:39.664 "method": "nvmf_set_config", 00:25:39.664 "params": { 00:25:39.664 "discovery_filter": "match_any", 00:25:39.664 "admin_cmd_passthru": { 00:25:39.664 "identify_ctrlr": false 00:25:39.664 }, 00:25:39.664 "dhchap_digests": [ 00:25:39.664 "sha256", 00:25:39.664 "sha384", 00:25:39.664 "sha512" 00:25:39.664 ], 00:25:39.664 "dhchap_dhgroups": [ 00:25:39.664 "null", 00:25:39.664 "ffdhe2048", 00:25:39.664 "ffdhe3072", 00:25:39.664 "ffdhe4096", 00:25:39.664 "ffdhe6144", 00:25:39.664 "ffdhe8192" 00:25:39.664 ] 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "nvmf_set_max_subsystems", 00:25:39.664 "params": { 00:25:39.664 "max_subsystems": 1024 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "nvmf_set_crdt", 00:25:39.664 "params": { 00:25:39.664 "crdt1": 0, 00:25:39.664 "crdt2": 0, 00:25:39.664 "crdt3": 0 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "nvmf_create_transport", 00:25:39.664 "params": { 00:25:39.664 "trtype": "TCP", 00:25:39.664 "max_queue_depth": 128, 00:25:39.664 "max_io_qpairs_per_ctrlr": 127, 00:25:39.664 "in_capsule_data_size": 4096, 00:25:39.664 "max_io_size": 131072, 00:25:39.664 "io_unit_size": 131072, 00:25:39.664 "max_aq_depth": 128, 00:25:39.664 "num_shared_buffers": 511, 00:25:39.664 "buf_cache_size": 4294967295, 00:25:39.664 "dif_insert_or_strip": false, 00:25:39.664 "zcopy": false, 00:25:39.664 "c2h_success": false, 00:25:39.664 "sock_priority": 0, 00:25:39.664 "abort_timeout_sec": 1, 00:25:39.664 "ack_timeout": 0, 00:25:39.664 "data_wr_pool_size": 0 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "nvmf_create_subsystem", 00:25:39.664 "params": { 00:25:39.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.664 "allow_any_host": false, 00:25:39.664 "serial_number": "00000000000000000000", 00:25:39.664 "model_number": "SPDK bdev Controller", 00:25:39.664 "max_namespaces": 32, 00:25:39.664 "min_cntlid": 1, 00:25:39.664 "max_cntlid": 65519, 00:25:39.664 "ana_reporting": false 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "nvmf_subsystem_add_host", 00:25:39.664 "params": { 00:25:39.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.664 "host": "nqn.2016-06.io.spdk:host1", 00:25:39.664 "psk": "key0" 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "nvmf_subsystem_add_ns", 00:25:39.664 "params": { 00:25:39.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.664 "namespace": { 00:25:39.664 "nsid": 1, 00:25:39.664 "bdev_name": "malloc0", 00:25:39.664 "nguid": "798BF158C7B14601989F37CAFB26AD15", 00:25:39.664 "uuid": "798bf158-c7b1-4601-989f-37cafb26ad15", 00:25:39.664 "no_auto_visible": false 00:25:39.664 } 00:25:39.664 } 00:25:39.664 }, 00:25:39.664 { 00:25:39.664 "method": "nvmf_subsystem_add_listener", 00:25:39.664 "params": { 00:25:39.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:39.664 "listen_address": { 00:25:39.664 "trtype": "TCP", 00:25:39.664 "adrfam": "IPv4", 00:25:39.664 "traddr": "10.0.0.2", 00:25:39.664 "trsvcid": "4420" 00:25:39.664 }, 00:25:39.664 
"secure_channel": false, 00:25:39.664 "sock_impl": "ssl" 00:25:39.664 } 00:25:39.664 } 00:25:39.664 ] 00:25:39.664 } 00:25:39.664 ] 00:25:39.664 }' 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3460043 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3460043 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3460043 ']' 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.664 14:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:39.664 [2024-11-25 14:23:44.612792] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:39.664 [2024-11-25 14:23:44.612859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:39.664 [2024-11-25 14:23:44.700840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.664 [2024-11-25 14:23:44.729068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:39.664 [2024-11-25 14:23:44.729094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:39.664 [2024-11-25 14:23:44.729100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:39.664 [2024-11-25 14:23:44.729105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:39.664 [2024-11-25 14:23:44.729108] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:39.664 [2024-11-25 14:23:44.729567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.925 [2024-11-25 14:23:44.922176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.925 [2024-11-25 14:23:44.954202] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:39.925 [2024-11-25 14:23:44.954409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3460248 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3460248 /var/tmp/bdevperf.sock 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3460248 ']' 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:40.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
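waitforlisten blocks until the fresh bdevperf instance answers on its UNIX-domain RPC socket. The helper itself lives in autotest_common.sh; a loose approximation of the condition it polls, not its actual body:

    until scripts/rpc.py -s /var/tmp/bdevperf.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done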
00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:40.498 14:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:40.498 "subsystems": [ 00:25:40.498 { 00:25:40.498 "subsystem": "keyring", 00:25:40.498 "config": [ 00:25:40.498 { 00:25:40.498 "method": "keyring_file_add_key", 00:25:40.498 "params": { 00:25:40.498 "name": "key0", 00:25:40.498 "path": "/tmp/tmp.92hhUuDJ2e" 00:25:40.498 } 00:25:40.498 } 00:25:40.498 ] 00:25:40.498 }, 00:25:40.498 { 00:25:40.498 "subsystem": "iobuf", 00:25:40.498 "config": [ 00:25:40.498 { 00:25:40.498 "method": "iobuf_set_options", 00:25:40.498 "params": { 00:25:40.498 "small_pool_count": 8192, 00:25:40.498 "large_pool_count": 1024, 00:25:40.498 "small_bufsize": 8192, 00:25:40.498 "large_bufsize": 135168, 00:25:40.498 "enable_numa": false 00:25:40.498 } 00:25:40.498 } 00:25:40.498 ] 00:25:40.498 }, 00:25:40.498 { 00:25:40.498 "subsystem": "sock", 00:25:40.498 "config": [ 00:25:40.498 { 00:25:40.498 "method": "sock_set_default_impl", 00:25:40.498 "params": { 00:25:40.498 "impl_name": "posix" 00:25:40.498 } 00:25:40.498 }, 00:25:40.498 { 00:25:40.498 "method": "sock_impl_set_options", 00:25:40.498 "params": { 00:25:40.498 "impl_name": "ssl", 00:25:40.498 "recv_buf_size": 4096, 00:25:40.498 "send_buf_size": 4096, 00:25:40.498 "enable_recv_pipe": true, 00:25:40.498 "enable_quickack": false, 00:25:40.498 "enable_placement_id": 0, 00:25:40.498 "enable_zerocopy_send_server": true, 00:25:40.498 "enable_zerocopy_send_client": false, 00:25:40.498 "zerocopy_threshold": 0, 00:25:40.498 "tls_version": 0, 00:25:40.498 "enable_ktls": false 00:25:40.498 } 00:25:40.498 }, 00:25:40.498 { 00:25:40.498 "method": "sock_impl_set_options", 00:25:40.498 "params": { 00:25:40.498 "impl_name": "posix", 00:25:40.498 "recv_buf_size": 2097152, 00:25:40.498 "send_buf_size": 2097152, 00:25:40.498 "enable_recv_pipe": true, 00:25:40.498 "enable_quickack": false, 00:25:40.498 "enable_placement_id": 0, 00:25:40.498 "enable_zerocopy_send_server": true, 00:25:40.498 "enable_zerocopy_send_client": false, 00:25:40.498 "zerocopy_threshold": 0, 00:25:40.498 "tls_version": 0, 00:25:40.498 "enable_ktls": false 00:25:40.498 } 00:25:40.498 } 00:25:40.498 ] 00:25:40.498 }, 00:25:40.498 { 00:25:40.498 "subsystem": "vmd", 00:25:40.498 "config": [] 00:25:40.498 }, 00:25:40.498 { 00:25:40.498 "subsystem": "accel", 00:25:40.498 "config": [ 00:25:40.498 { 00:25:40.498 "method": "accel_set_options", 00:25:40.498 "params": { 00:25:40.498 "small_cache_size": 128, 00:25:40.498 "large_cache_size": 16, 00:25:40.498 "task_count": 2048, 00:25:40.498 "sequence_count": 2048, 00:25:40.498 "buf_count": 2048 00:25:40.498 } 00:25:40.498 } 00:25:40.498 ] 00:25:40.498 }, 00:25:40.498 { 00:25:40.498 "subsystem": "bdev", 00:25:40.498 "config": [ 00:25:40.498 { 00:25:40.498 "method": "bdev_set_options", 00:25:40.498 "params": { 00:25:40.498 "bdev_io_pool_size": 65535, 00:25:40.498 "bdev_io_cache_size": 256, 00:25:40.498 "bdev_auto_examine": true, 00:25:40.498 "iobuf_small_cache_size": 128, 00:25:40.498 "iobuf_large_cache_size": 16 00:25:40.498 } 00:25:40.498 }, 00:25:40.498 { 00:25:40.499 "method": 
"bdev_raid_set_options", 00:25:40.499 "params": { 00:25:40.499 "process_window_size_kb": 1024, 00:25:40.499 "process_max_bandwidth_mb_sec": 0 00:25:40.499 } 00:25:40.499 }, 00:25:40.499 { 00:25:40.499 "method": "bdev_iscsi_set_options", 00:25:40.499 "params": { 00:25:40.499 "timeout_sec": 30 00:25:40.499 } 00:25:40.499 }, 00:25:40.499 { 00:25:40.499 "method": "bdev_nvme_set_options", 00:25:40.499 "params": { 00:25:40.499 "action_on_timeout": "none", 00:25:40.499 "timeout_us": 0, 00:25:40.499 "timeout_admin_us": 0, 00:25:40.499 "keep_alive_timeout_ms": 10000, 00:25:40.499 "arbitration_burst": 0, 00:25:40.499 "low_priority_weight": 0, 00:25:40.499 "medium_priority_weight": 0, 00:25:40.499 "high_priority_weight": 0, 00:25:40.499 "nvme_adminq_poll_period_us": 10000, 00:25:40.499 "nvme_ioq_poll_period_us": 0, 00:25:40.499 "io_queue_requests": 512, 00:25:40.499 "delay_cmd_submit": true, 00:25:40.499 "transport_retry_count": 4, 00:25:40.499 "bdev_retry_count": 3, 00:25:40.499 "transport_ack_timeout": 0, 00:25:40.499 "ctrlr_loss_timeout_sec": 0, 00:25:40.499 "reconnect_delay_sec": 0, 00:25:40.499 "fast_io_fail_timeout_sec": 0, 00:25:40.499 "disable_auto_failback": false, 00:25:40.499 "generate_uuids": false, 00:25:40.499 "transport_tos": 0, 00:25:40.499 "nvme_error_stat": false, 00:25:40.499 "rdma_srq_size": 0, 00:25:40.499 "io_path_stat": false, 00:25:40.499 "allow_accel_sequence": false, 00:25:40.499 "rdma_max_cq_size": 0, 00:25:40.499 "rdma_cm_event_timeout_ms": 0, 00:25:40.499 "dhchap_digests": [ 00:25:40.499 "sha256", 00:25:40.499 "sha384", 00:25:40.499 "sha512" 00:25:40.499 ], 00:25:40.499 "dhchap_dhgroups": [ 00:25:40.499 "null", 00:25:40.499 "ffdhe2048", 00:25:40.499 "ffdhe3072", 00:25:40.499 "ffdhe4096", 00:25:40.499 "ffdhe6144", 00:25:40.499 "ffdhe8192" 00:25:40.499 ] 00:25:40.499 } 00:25:40.499 }, 00:25:40.499 { 00:25:40.499 "method": "bdev_nvme_attach_controller", 00:25:40.499 "params": { 00:25:40.499 "name": "nvme0", 00:25:40.499 "trtype": "TCP", 00:25:40.499 "adrfam": "IPv4", 00:25:40.499 "traddr": "10.0.0.2", 00:25:40.499 "trsvcid": "4420", 00:25:40.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:40.499 "prchk_reftag": false, 00:25:40.499 "prchk_guard": false, 00:25:40.499 "ctrlr_loss_timeout_sec": 0, 00:25:40.499 "reconnect_delay_sec": 0, 00:25:40.499 "fast_io_fail_timeout_sec": 0, 00:25:40.499 "psk": "key0", 00:25:40.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:40.499 "hdgst": false, 00:25:40.499 "ddgst": false, 00:25:40.499 "multipath": "multipath" 00:25:40.499 } 00:25:40.499 }, 00:25:40.499 { 00:25:40.499 "method": "bdev_nvme_set_hotplug", 00:25:40.499 "params": { 00:25:40.499 "period_us": 100000, 00:25:40.499 "enable": false 00:25:40.499 } 00:25:40.499 }, 00:25:40.499 { 00:25:40.499 "method": "bdev_enable_histogram", 00:25:40.499 "params": { 00:25:40.499 "name": "nvme0n1", 00:25:40.499 "enable": true 00:25:40.499 } 00:25:40.499 }, 00:25:40.499 { 00:25:40.499 "method": "bdev_wait_for_examine" 00:25:40.499 } 00:25:40.499 ] 00:25:40.499 }, 00:25:40.499 { 00:25:40.499 "subsystem": "nbd", 00:25:40.499 "config": [] 00:25:40.499 } 00:25:40.499 ] 00:25:40.499 }' 00:25:40.499 [2024-11-25 14:23:45.478017] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:25:40.499 [2024-11-25 14:23:45.478073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460248 ] 00:25:40.499 [2024-11-25 14:23:45.559318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.760 [2024-11-25 14:23:45.589332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.760 [2024-11-25 14:23:45.723950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:41.331 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.331 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:41.331 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.331 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:41.592 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.592 14:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:41.592 Running I/O for 1 seconds... 00:25:42.535 5689.00 IOPS, 22.22 MiB/s 00:25:42.535 Latency(us) 00:25:42.535 [2024-11-25T13:23:47.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.535 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:42.535 Verification LBA range: start 0x0 length 0x2000 00:25:42.535 nvme0n1 : 1.01 5743.65 22.44 0.00 0.00 22148.05 4614.83 38010.88 00:25:42.535 [2024-11-25T13:23:47.625Z] =================================================================================================================== 00:25:42.535 [2024-11-25T13:23:47.625Z] Total : 5743.65 22.44 0.00 0.00 22148.05 4614.83 38010.88 00:25:42.535 { 00:25:42.535 "results": [ 00:25:42.535 { 00:25:42.535 "job": "nvme0n1", 00:25:42.535 "core_mask": "0x2", 00:25:42.535 "workload": "verify", 00:25:42.535 "status": "finished", 00:25:42.535 "verify_range": { 00:25:42.535 "start": 0, 00:25:42.535 "length": 8192 00:25:42.535 }, 00:25:42.535 "queue_depth": 128, 00:25:42.535 "io_size": 4096, 00:25:42.535 "runtime": 1.012771, 00:25:42.535 "iops": 5743.647873013741, 00:25:42.535 "mibps": 22.436124503959928, 00:25:42.535 "io_failed": 0, 00:25:42.535 "io_timeout": 0, 00:25:42.535 "avg_latency_us": 22148.046768666554, 00:25:42.535 "min_latency_us": 4614.826666666667, 00:25:42.535 "max_latency_us": 38010.88 00:25:42.535 } 00:25:42.535 ], 00:25:42.535 "core_count": 1 00:25:42.535 } 00:25:42.535 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:42.535 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:42.535 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:42.535 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:25:42.535 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:25:42.535 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:25:42.536 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:42.536 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:42.536 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:42.536 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:42.536 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:42.536 nvmf_trace.0 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3460248 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3460248 ']' 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3460248 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3460248 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3460248' 00:25:42.797 killing process with pid 3460248 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3460248 00:25:42.797 Received shutdown signal, test time was about 1.000000 seconds 00:25:42.797 00:25:42.797 Latency(us) 00:25:42.797 [2024-11-25T13:23:47.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.797 [2024-11-25T13:23:47.887Z] =================================================================================================================== 00:25:42.797 [2024-11-25T13:23:47.887Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3460248 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:42.797 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:42.798 rmmod nvme_tcp 00:25:42.798 rmmod nvme_fabrics 00:25:42.798 rmmod nvme_keyring 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:42.798 14:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3460043 ']' 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3460043 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3460043 ']' 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3460043 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:42.798 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3460043 00:25:43.058 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.058 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.058 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3460043' 00:25:43.058 killing process with pid 3460043 00:25:43.059 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3460043 00:25:43.059 14:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3460043 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:43.059 14:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hTYH1HDqOX /tmp/tmp.jc67cHvVCq /tmp/tmp.92hhUuDJ2e 00:25:45.609 00:25:45.609 real 1m28.171s 00:25:45.609 user 2m19.641s 00:25:45.609 sys 0m27.093s 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:45.609 ************************************ 00:25:45.609 END TEST nvmf_tls 
00:25:45.609 ************************************ 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:45.609 ************************************ 00:25:45.609 START TEST nvmf_fips 00:25:45.609 ************************************ 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:45.609 * Looking for test storage... 00:25:45.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.609 --rc genhtml_branch_coverage=1 00:25:45.609 --rc genhtml_function_coverage=1 00:25:45.609 --rc genhtml_legend=1 00:25:45.609 --rc geninfo_all_blocks=1 00:25:45.609 --rc geninfo_unexecuted_blocks=1 00:25:45.609 00:25:45.609 ' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.609 --rc genhtml_branch_coverage=1 00:25:45.609 --rc genhtml_function_coverage=1 00:25:45.609 --rc genhtml_legend=1 00:25:45.609 --rc geninfo_all_blocks=1 00:25:45.609 --rc geninfo_unexecuted_blocks=1 00:25:45.609 00:25:45.609 ' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.609 --rc genhtml_branch_coverage=1 00:25:45.609 --rc genhtml_function_coverage=1 00:25:45.609 --rc genhtml_legend=1 00:25:45.609 --rc geninfo_all_blocks=1 00:25:45.609 --rc geninfo_unexecuted_blocks=1 00:25:45.609 00:25:45.609 ' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:45.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:45.609 --rc genhtml_branch_coverage=1 00:25:45.609 --rc genhtml_function_coverage=1 00:25:45.609 --rc genhtml_legend=1 00:25:45.609 --rc geninfo_all_blocks=1 00:25:45.609 --rc geninfo_unexecuted_blocks=1 00:25:45.609 00:25:45.609 ' 00:25:45.609 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:45.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:45.610 14:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:45.610 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:45.611 Error setting digest 00:25:45.611 40C2121ECA7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:45.611 40C2121ECA7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:45.611 
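The stretch of fips.sh traced above is a three-part gate run before any NVMe/TCP work starts: OpenSSL must be version 3.0.0 or newer, `openssl list -providers` must name a fips provider, and a plain MD5 digest must be refused — which is why the "Error setting digest" lines followed by es=1 are the expected, passing outcome here. Condensed into standalone shell, the gate behaves roughly like this sketch (simplified from the traced helpers, not copied from fips.sh):

  # Sketch of the FIPS gate seen in the trace above.
  openssl version                                    # must report >= 3.0.0
  openssl list -providers | grep -qi fips || exit 1  # a fips provider is loaded
  if echo test | openssl md5 >/dev/null 2>&1; then   # MD5 must fail under FIPS
      echo "FIPS mode not enforced"; exit 1
  fi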
14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:45.611 14:23:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.787 14:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.787 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:53.788 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:53.788 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.788 14:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:53.788 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:53.788 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.788 14:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.788 14:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms 00:25:53.788 00:25:53.788 --- 10.0.0.2 ping statistics --- 00:25:53.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.788 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:53.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:25:53.788 00:25:53.788 --- 10.0.0.1 ping statistics --- 00:25:53.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.788 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3465031 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3465031 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3465031 ']' 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.788 14:23:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:53.788 [2024-11-25 14:23:58.203824] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:25:53.788 [2024-11-25 14:23:58.203898] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.788 [2024-11-25 14:23:58.306209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.788 [2024-11-25 14:23:58.355900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.788 [2024-11-25 14:23:58.355953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.788 [2024-11-25 14:23:58.355961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.788 [2024-11-25 14:23:58.355969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.788 [2024-11-25 14:23:58.355975] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.788 [2024-11-25 14:23:58.356736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.kTS 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.kTS 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.kTS 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.kTS 00:25:54.050 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:54.329 [2024-11-25 14:23:59.242971] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.329 [2024-11-25 14:23:59.258975] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:54.329 [2024-11-25 14:23:59.259321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.329 malloc0 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.329 14:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3465149 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3465149 /var/tmp/bdevperf.sock 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3465149 ']' 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:54.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.329 14:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:54.329 [2024-11-25 14:23:59.404448] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:25:54.329 [2024-11-25 14:23:59.404525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3465149 ] 00:25:54.595 [2024-11-25 14:23:59.497994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.595 [2024-11-25 14:23:59.549218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.168 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.168 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:55.168 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.kTS 00:25:55.429 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:55.690 [2024-11-25 14:24:00.582997] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:55.690 TLSTESTn1 00:25:55.690 14:24:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:55.951 Running I/O for 10 seconds... 
00:25:57.837 3150.00 IOPS, 12.30 MiB/s [2024-11-25T13:24:03.868Z] 3677.50 IOPS, 14.37 MiB/s [2024-11-25T13:24:04.809Z] 4613.67 IOPS, 18.02 MiB/s [2024-11-25T13:24:06.194Z] 5014.25 IOPS, 19.59 MiB/s [2024-11-25T13:24:07.166Z] 5231.00 IOPS, 20.43 MiB/s [2024-11-25T13:24:08.107Z] 5309.67 IOPS, 20.74 MiB/s [2024-11-25T13:24:09.050Z] 5495.86 IOPS, 21.47 MiB/s [2024-11-25T13:24:09.992Z] 5571.62 IOPS, 21.76 MiB/s [2024-11-25T13:24:10.934Z] 5551.67 IOPS, 21.69 MiB/s [2024-11-25T13:24:10.934Z] 5567.10 IOPS, 21.75 MiB/s 00:26:05.844 Latency(us) 00:26:05.844 [2024-11-25T13:24:10.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.844 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:05.844 Verification LBA range: start 0x0 length 0x2000 00:26:05.844 TLSTESTn1 : 10.03 5564.24 21.74 0.00 0.00 22956.34 6144.00 50025.81 00:26:05.844 [2024-11-25T13:24:10.934Z] =================================================================================================================== 00:26:05.844 [2024-11-25T13:24:10.934Z] Total : 5564.24 21.74 0.00 0.00 22956.34 6144.00 50025.81 00:26:05.844 { 00:26:05.844 "results": [ 00:26:05.844 { 00:26:05.844 "job": "TLSTESTn1", 00:26:05.844 "core_mask": "0x4", 00:26:05.844 "workload": "verify", 00:26:05.844 "status": "finished", 00:26:05.844 "verify_range": { 00:26:05.844 "start": 0, 00:26:05.844 "length": 8192 00:26:05.844 }, 00:26:05.844 "queue_depth": 128, 00:26:05.844 "io_size": 4096, 00:26:05.844 "runtime": 10.027781, 00:26:05.844 "iops": 5564.241979357148, 00:26:05.844 "mibps": 21.73532023186386, 00:26:05.844 "io_failed": 0, 00:26:05.844 "io_timeout": 0, 00:26:05.844 "avg_latency_us": 22956.34155336906, 00:26:05.844 "min_latency_us": 6144.0, 00:26:05.844 "max_latency_us": 50025.81333333333 00:26:05.844 } 00:26:05.844 ], 00:26:05.844 "core_count": 1 00:26:05.844 } 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:26:05.844 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:05.844 nvmf_trace.0 00:26:06.105 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:26:06.105 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3465149 00:26:06.105 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3465149 ']' 00:26:06.105 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
-- # kill -0 3465149 00:26:06.105 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:06.105 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.105 14:24:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3465149 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3465149' 00:26:06.105 killing process with pid 3465149 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3465149 00:26:06.105 Received shutdown signal, test time was about 10.000000 seconds 00:26:06.105 00:26:06.105 Latency(us) 00:26:06.105 [2024-11-25T13:24:11.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.105 [2024-11-25T13:24:11.195Z] =================================================================================================================== 00:26:06.105 [2024-11-25T13:24:11.195Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3465149 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:06.105 rmmod nvme_tcp 00:26:06.105 rmmod nvme_fabrics 00:26:06.105 rmmod nvme_keyring 00:26:06.105 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3465031 ']' 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3465031 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3465031 ']' 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3465031 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3465031 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3465031' 00:26:06.367 killing process with pid 3465031 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3465031 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3465031 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.367 14:24:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.kTS 00:26:08.455 00:26:08.455 real 0m23.259s 00:26:08.455 user 0m25.058s 00:26:08.455 sys 0m9.635s 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:26:08.455 ************************************ 00:26:08.455 END TEST nvmf_fips 00:26:08.455 ************************************ 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:08.455 ************************************ 00:26:08.455 START TEST nvmf_control_msg_list 00:26:08.455 ************************************ 00:26:08.455 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:26:08.716 * Looking for test storage... 
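Note on the fips teardown traced just above: stripped of the xtrace noise, the cleanup reduces to a short shell sequence. The sketch below is a hedged reconstruction, not the harness source; the PID, the cvl_* interface and namespace names, and the PSK path are the ones this particular run happened to use, and _remove_spdk_ns is assumed to amount to an ip netns delete:

    kill 3465149 && wait 3465149                             # killprocess: stop the target app
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics   # unload host-side NVMe modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore     # iptr: drop only SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk                          # _remove_spdk_ns (assumed implementation)
    ip -4 addr flush cvl_0_1                                 # clear the initiator-side address
    rm -f /tmp/spdk-psk.kTS                                  # remove this run's TLS pre-shared key file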
00:26:08.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:08.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.716 --rc genhtml_branch_coverage=1 00:26:08.716 --rc genhtml_function_coverage=1 00:26:08.716 --rc genhtml_legend=1 00:26:08.716 --rc geninfo_all_blocks=1 00:26:08.716 --rc geninfo_unexecuted_blocks=1 00:26:08.716 00:26:08.716 ' 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:08.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.716 --rc genhtml_branch_coverage=1 00:26:08.716 --rc genhtml_function_coverage=1 00:26:08.716 --rc genhtml_legend=1 00:26:08.716 --rc geninfo_all_blocks=1 00:26:08.716 --rc geninfo_unexecuted_blocks=1 00:26:08.716 00:26:08.716 ' 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:08.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.716 --rc genhtml_branch_coverage=1 00:26:08.716 --rc genhtml_function_coverage=1 00:26:08.716 --rc genhtml_legend=1 00:26:08.716 --rc geninfo_all_blocks=1 00:26:08.716 --rc geninfo_unexecuted_blocks=1 00:26:08.716 00:26:08.716 ' 00:26:08.716 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:08.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.717 --rc genhtml_branch_coverage=1 00:26:08.717 --rc genhtml_function_coverage=1 00:26:08.717 --rc genhtml_legend=1 00:26:08.717 --rc geninfo_all_blocks=1 00:26:08.717 --rc geninfo_unexecuted_blocks=1 00:26:08.717 00:26:08.717 ' 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:08.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:26:08.717 14:24:13 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:26:16.858 14:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:16.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.858 14:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:16.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:16.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:16.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:16.858 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:16.859 14:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:16.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:16.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms
00:26:16.859
00:26:16.859 --- 10.0.0.2 ping statistics ---
00:26:16.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:16.859 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:16.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:16.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms
00:26:16.859
00:26:16.859 --- 10.0.0.1 ping statistics ---
00:26:16.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:16.859 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:16.859 14:24:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3472284
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3472284
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3472284 ']'
00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:16.859 [2024-11-25 14:24:21.080802] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:26:16.859 [2024-11-25 14:24:21.080853] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.859 [2024-11-25 14:24:21.173613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.859 [2024-11-25 14:24:21.207840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.859 [2024-11-25 14:24:21.207871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.859 [2024-11-25 14:24:21.207879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.859 [2024-11-25 14:24:21.207886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.859 [2024-11-25 14:24:21.207891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
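The nvmfappstart/waitforlisten trace above and the RPC provisioning traced below amount to the following launch-and-configure sequence, shown as a hedged sketch: rpc.py and the default /var/tmp/spdk.sock socket are assumed, the paths are shortened relative to the spdk checkout, and the transport/subsystem flags are copied verbatim from this run's trace:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # waitforlisten: poll the RPC socket until the target answers
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420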
00:26:16.859 [2024-11-25 14:24:21.208454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:16.859 [2024-11-25 14:24:21.916594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.859 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:17.118 Malloc0 00:26:17.118 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.119 14:24:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:17.119 [2024-11-25 14:24:21.971092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3472401 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3472402 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3472403 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3472401 00:26:17.119 14:24:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.119 [2024-11-25 14:24:22.081932] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:17.119 [2024-11-25 14:24:22.082291] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:17.119 [2024-11-25 14:24:22.082543] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:18.056 Initializing NVMe Controllers 00:26:18.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:18.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:26:18.056 Initialization complete. Launching workers. 
00:26:18.056 ========================================================
00:26:18.056 Latency(us)
00:26:18.056 Device Information : IOPS MiB/s Average min max
00:26:18.056 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 26.00 0.10 39362.25 475.61 41301.68
00:26:18.056 ========================================================
00:26:18.056 Total : 26.00 0.10 39362.25 475.61 41301.68
00:26:18.056
00:26:18.056 Initializing NVMe Controllers
00:26:18.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:26:18.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:26:18.056 Initialization complete. Launching workers.
00:26:18.056 ========================================================
00:26:18.056 Latency(us)
00:26:18.056 Device Information : IOPS MiB/s Average min max
00:26:18.056 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1649.00 6.44 606.41 140.93 856.30
00:26:18.056 ========================================================
00:26:18.056 Total : 1649.00 6.44 606.41 140.93 856.30
00:26:18.056
00:26:18.316 Initializing NVMe Controllers
00:26:18.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:26:18.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:26:18.316 Initialization complete. Launching workers.
00:26:18.316 ========================================================
00:26:18.316 Latency(us)
00:26:18.316 Device Information : IOPS MiB/s Average min max
00:26:18.316 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40923.39 40827.32 41370.57
00:26:18.316 ========================================================
00:26:18.316 Total : 25.00 0.10 40923.39 40827.32 41370.57
00:26:18.316
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3472402
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3472403
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:18.316 rmmod nvme_tcp
00:26:18.316 rmmod nvme_fabrics
00:26:18.316 rmmod nvme_keyring
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- #
'[' -n 3472284 ']' 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3472284 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3472284 ']' 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3472284 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3472284 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3472284' 00:26:18.316 killing process with pid 3472284 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3472284 00:26:18.316 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3472284 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.577 14:24:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.488 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.488 00:26:20.488 real 0m12.031s 00:26:20.488 user 0m7.795s 00:26:20.488 sys 0m6.182s 00:26:20.488 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.488 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:20.488 ************************************ 00:26:20.488 END TEST nvmf_control_msg_list 00:26:20.488 ************************************ 
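A reading of the three result blocks above, hedged since the log alone does not prove the mechanism: the transport was created with --control-msg-num 1, so the target has a single control-message buffer to share, and two of the three queue-depth-1 initiators appear serialized behind it (about 25-26 IOPS at roughly 39-41 ms average latency) while the third ran unthrottled (1649 IOPS at about 0.6 ms). That contrast is what this suite is built to exercise. The START/END banners and the real/user/sys lines bracketing each suite come from a run_test-style helper; a minimal sketch of its visible behavior (inferred from this log, not the harness source):

    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # produces the real/user/sys triple seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }
    run_test nvmf_wait_for_buf ./test/nvmf/target/wait_for_buf.sh --transport=tcp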
00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:20.749 ************************************ 00:26:20.749 START TEST nvmf_wait_for_buf 00:26:20.749 ************************************ 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:20.749 * Looking for test storage... 00:26:20.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.749 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:21.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.010 --rc genhtml_branch_coverage=1 00:26:21.010 --rc genhtml_function_coverage=1 00:26:21.010 --rc genhtml_legend=1 00:26:21.010 --rc geninfo_all_blocks=1 00:26:21.010 --rc geninfo_unexecuted_blocks=1 00:26:21.010 00:26:21.010 ' 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:21.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.010 --rc genhtml_branch_coverage=1 00:26:21.010 --rc genhtml_function_coverage=1 00:26:21.010 --rc genhtml_legend=1 00:26:21.010 --rc geninfo_all_blocks=1 00:26:21.010 --rc geninfo_unexecuted_blocks=1 00:26:21.010 00:26:21.010 ' 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:21.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.010 --rc genhtml_branch_coverage=1 00:26:21.010 --rc genhtml_function_coverage=1 00:26:21.010 --rc genhtml_legend=1 00:26:21.010 --rc geninfo_all_blocks=1 00:26:21.010 --rc geninfo_unexecuted_blocks=1 00:26:21.010 00:26:21.010 ' 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:21.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.010 --rc genhtml_branch_coverage=1 00:26:21.010 --rc genhtml_function_coverage=1 00:26:21.010 --rc genhtml_legend=1 00:26:21.010 --rc geninfo_all_blocks=1 00:26:21.010 --rc geninfo_unexecuted_blocks=1 00:26:21.010 00:26:21.010 ' 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.010 14:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.010 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:21.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
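Note: the "[: : integer expression expected" failure from nvmf/common.sh line 33 above is benign: test's -eq requires integer operands, and the flag being tested expands to the empty string on this run, so the condition simply evaluates false and the script continues at @37. A two-line reproduction plus the usual guard (the flag name is illustrative):

flag=""
[ "$flag" -eq 1 ] && echo on          # bash: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo on     # guarded form: empty counts as 0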
'[' -z tcp ']' 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:21.011 14:24:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.155 
14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:29.155 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:29.155 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:29.155 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:29.155 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:29.155 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.156 14:24:33 
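Note: the "Found net devices under ..." lines above come from globbing each matched PCI function's net/ directory in sysfs and keeping only interfaces that are up. A sketch of that discovery step; reading operstate is an assumption about how the trace's up == up check is sourced:

pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
for dev in "${pci_net_devs[@]}"; do
    [ -e "$dev" ] || continue                        # glob may match nothing
    [ "$(cat "$dev/operstate")" = up ] || continue   # skip downed interfaces
    echo "Found net devices under $pci: ${dev##*/}"
done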
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:29.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:29.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:26:29.156 00:26:29.156 --- 10.0.0.2 ping statistics --- 00:26:29.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.156 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:29.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:26:29.156 00:26:29.156 --- 10.0.0.1 ping statistics --- 00:26:29.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.156 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3476906 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3476906 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3476906 ']' 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.156 14:24:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.156 [2024-11-25 14:24:33.476562] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
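Note: the two ping replies above close out nvmf_tcp_init. Condensed from the trace, the topology moves the target port into its own network namespace so 10.0.0.1 <-> 10.0.0.2 traffic genuinely crosses between the two E810 ports instead of short-circuiting through the host stack:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The ACCEPT rule carries an SPDK_NVMF comment so teardown can strip it with
# iptables-save | grep -v SPDK_NVMF | iptables-restore (the iptr step later in the log):
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator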
00:26:29.156 [2024-11-25 14:24:33.476631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:29.156 [2024-11-25 14:24:33.575065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.156 [2024-11-25 14:24:33.625777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.156 [2024-11-25 14:24:33.625829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.156 [2024-11-25 14:24:33.625838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.156 [2024-11-25 14:24:33.625845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.156 [2024-11-25 14:24:33.625851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.156 [2024-11-25 14:24:33.626665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 Malloc0 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 [2024-11-25 14:24:34.471529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.419 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:29.682 [2024-11-25 14:24:34.507851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.682 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.682 14:24:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:29.682 [2024-11-25 14:24:34.613274] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:31.071 Initializing NVMe Controllers 00:26:31.071 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:31.071 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:31.071 Initialization complete. Launching workers. 00:26:31.071 ======================================================== 00:26:31.071 Latency(us) 00:26:31.071 Device Information : IOPS MiB/s Average min max 00:26:31.071 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165995.41 47863.30 191554.09 00:26:31.071 ======================================================== 00:26:31.071 Total : 25.00 3.12 165995.41 47863.30 191554.09 00:26:31.071 00:26:31.071 14:24:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:31.071 rmmod nvme_tcp 00:26:31.071 rmmod nvme_fabrics 00:26:31.071 rmmod nvme_keyring 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3476906 ']' 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3476906 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3476906 ']' 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3476906 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.071 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3476906 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3476906' 00:26:31.333 killing process with pid 3476906 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3476906 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3476906 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.333 14:24:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.884 00:26:33.884 real 0m12.767s 00:26:33.884 user 0m5.164s 00:26:33.884 sys 0m6.199s 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:33.884 ************************************ 00:26:33.884 END TEST nvmf_wait_for_buf 00:26:33.884 ************************************ 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:26:33.884 14:24:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.884 14:24:38 
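Note: condensed, the nvmf_wait_for_buf test that just passed runs the sequence below; rpc_cmd in the trace resolves to SPDK's scripts/rpc.py, and every flag shown is taken from the xtrace:

ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # starve the small pool
rpc.py framework_start_init
rpc.py bdev_malloc_create -b Malloc0 32 512
rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
retries=$(rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
[ "$retries" -eq 0 ] && exit 1   # pass requires buffer waits; this run saw 374

With only 154 small iobufs backing 131072-byte reads, the transport has to queue for buffers, which is exactly what the nonzero small_pool.retry counter verifies.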
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:42.029 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:42.030 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:42.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:42.030 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:42.030 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:42.030 ************************************ 00:26:42.030 START TEST nvmf_perf_adq 00:26:42.030 ************************************ 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:42.030 * Looking for test storage... 00:26:42.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.030 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:42.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.030 --rc genhtml_branch_coverage=1 00:26:42.030 --rc genhtml_function_coverage=1 00:26:42.030 --rc genhtml_legend=1 00:26:42.030 --rc geninfo_all_blocks=1 00:26:42.030 --rc geninfo_unexecuted_blocks=1 00:26:42.030 00:26:42.030 ' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:42.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.030 --rc genhtml_branch_coverage=1 00:26:42.030 --rc genhtml_function_coverage=1 00:26:42.030 --rc genhtml_legend=1 00:26:42.030 --rc geninfo_all_blocks=1 00:26:42.030 --rc geninfo_unexecuted_blocks=1 00:26:42.030 00:26:42.030 ' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:42.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.030 --rc genhtml_branch_coverage=1 00:26:42.030 --rc genhtml_function_coverage=1 00:26:42.030 --rc genhtml_legend=1 00:26:42.030 --rc geninfo_all_blocks=1 00:26:42.030 --rc geninfo_unexecuted_blocks=1 00:26:42.030 00:26:42.030 ' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:42.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.030 --rc genhtml_branch_coverage=1 00:26:42.030 --rc genhtml_function_coverage=1 00:26:42.030 --rc genhtml_legend=1 00:26:42.030 --rc geninfo_all_blocks=1 00:26:42.030 --rc geninfo_unexecuted_blocks=1 00:26:42.030 00:26:42.030 ' 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.030 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:42.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:42.031 14:24:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.031 14:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.622 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:48.623 14:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:48.623 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:48.623 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:48.623 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:48.623 14:24:53 
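[Each "Found net devices under ..." message above comes from resolving a PCI function to its kernel interface through sysfs. A standalone sketch of that glob-and-strip lookup, using the first port found above:

pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
for net_dev in "${pci_net_devs[@]}"; do
    echo "Found net devices under $pci: $net_dev"
done
]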
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:48.623 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:48.623 14:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:50.009 14:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:51.925 14:24:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
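[adq_reload_driver, traced above (perf_adq.sh lines 58-63), starts the run from a clean driver state before nvmftestinit re-discovers the ports below. A condensed sketch of the step as a function:

adq_reload_driver() {
    modprobe -a sch_mqprio   # qdisc module used by the mqprio channel setup later
    rmmod ice                # unload and reload the E810 driver to clear prior channel config
    modprobe ice
    sleep 5                  # give cvl_0_0/cvl_0_1 time to re-register
}
]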
gather_supported_nvmf_pci_devs 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:57.217 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:57.217 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:57.217 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:57.217 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.217 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.218 14:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:57.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:26:57.218 00:26:57.218 --- 10.0.0.2 ping statistics --- 00:26:57.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.218 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:26:57.218 00:26:57.218 --- 10.0.0.1 ping statistics --- 00:26:57.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.218 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3487087 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3487087 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3487087 ']' 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.218 14:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.218 [2024-11-25 14:25:02.259960] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
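[nvmftestinit's nvmf_tcp_init, traced above, builds the test topology from the two physical E810 ports: one moves into a private network namespace as the target, its sibling stays in the root namespace as the initiator, a firewall exception admits NVMe/TCP, and a ping in each direction proves the path. A condensed sketch of exactly those commands:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root
]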
00:26:57.218 [2024-11-25 14:25:02.260029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.479 [2024-11-25 14:25:02.362505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:57.479 [2024-11-25 14:25:02.417231] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.479 [2024-11-25 14:25:02.417286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.479 [2024-11-25 14:25:02.417296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.479 [2024-11-25 14:25:02.417303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.479 [2024-11-25 14:25:02.417309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.479 [2024-11-25 14:25:02.419309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.479 [2024-11-25 14:25:02.419469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.479 [2024-11-25 14:25:02.419635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.479 [2024-11-25 14:25:02.419637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.053 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 
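[adq_configure_nvmf_target 0 starts here and completes in the trace that follows: tune the posix sock implementation, release the app from --wait-for-rpc, create the TCP transport, and publish a malloc-backed subsystem on 10.0.0.2:4420. A sketch of the same sequence as direct scripts/rpc.py calls; the traced run issues these through the rpc_cmd wrapper, so the invocation form (not the arguments) is the assumption here:

./scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1          # 64 MB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
]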
14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 [2024-11-25 14:25:03.284371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 Malloc1 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:58.315 [2024-11-25 14:25:03.360300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3487359 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:26:58.315 14:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:00.863 "tick_rate": 2400000000, 00:27:00.863 "poll_groups": [ 00:27:00.863 { 00:27:00.863 "name": "nvmf_tgt_poll_group_000", 00:27:00.863 "admin_qpairs": 1, 00:27:00.863 "io_qpairs": 1, 00:27:00.863 "current_admin_qpairs": 1, 00:27:00.863 "current_io_qpairs": 1, 00:27:00.863 "pending_bdev_io": 0, 00:27:00.863 "completed_nvme_io": 15886, 00:27:00.863 "transports": [ 00:27:00.863 { 00:27:00.863 "trtype": "TCP" 00:27:00.863 } 00:27:00.863 ] 00:27:00.863 }, 00:27:00.863 { 00:27:00.863 "name": "nvmf_tgt_poll_group_001", 00:27:00.863 "admin_qpairs": 0, 00:27:00.863 "io_qpairs": 1, 00:27:00.863 "current_admin_qpairs": 0, 00:27:00.863 "current_io_qpairs": 1, 00:27:00.863 "pending_bdev_io": 0, 00:27:00.863 "completed_nvme_io": 16549, 00:27:00.863 "transports": [ 00:27:00.863 { 00:27:00.863 "trtype": "TCP" 00:27:00.863 } 00:27:00.863 ] 00:27:00.863 }, 00:27:00.863 { 00:27:00.863 "name": "nvmf_tgt_poll_group_002", 00:27:00.863 "admin_qpairs": 0, 00:27:00.863 "io_qpairs": 1, 00:27:00.863 "current_admin_qpairs": 0, 00:27:00.863 "current_io_qpairs": 1, 00:27:00.863 "pending_bdev_io": 0, 00:27:00.863 "completed_nvme_io": 18321, 00:27:00.863 "transports": [ 00:27:00.863 { 00:27:00.863 "trtype": "TCP" 00:27:00.863 } 00:27:00.863 ] 00:27:00.863 }, 00:27:00.863 { 00:27:00.863 "name": "nvmf_tgt_poll_group_003", 00:27:00.863 "admin_qpairs": 0, 00:27:00.863 "io_qpairs": 1, 00:27:00.863 "current_admin_qpairs": 0, 00:27:00.863 "current_io_qpairs": 1, 00:27:00.863 "pending_bdev_io": 0, 00:27:00.863 "completed_nvme_io": 16090, 00:27:00.863 "transports": [ 00:27:00.863 { 00:27:00.863 "trtype": "TCP" 00:27:00.863 } 00:27:00.863 ] 00:27:00.863 } 00:27:00.863 ] 00:27:00.863 }' 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:00.863 14:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3487359 00:27:09.006 Initializing NVMe Controllers 00:27:09.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:09.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:09.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:09.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:09.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:27:09.006 Initialization complete. Launching workers.
00:27:09.006 ========================================================
00:27:09.006                                                                  Latency(us)
00:27:09.007 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:27:09.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   12556.53      49.05    5097.90    1124.55   12622.12
00:27:09.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   12927.83      50.50    4950.17    1414.23   12236.88
00:27:09.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   13650.42      53.32    4688.11    1513.23   12354.59
00:27:09.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:   12627.83      49.33    5068.02    1289.21   13527.17
00:27:09.007 ========================================================
00:27:09.007 Total                                                         :   51762.60     202.20    4945.65    1124.55   13527.17
00:27:09.007
00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:09.007 rmmod nvme_tcp
00:27:09.007 rmmod nvme_fabrics
00:27:09.007 rmmod nvme_keyring
00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3487087 ']'
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3487087
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3487087 ']'
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3487087
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3487087
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3487087'
killing process with pid 3487087
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3487087
14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3487087
14:25:13
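[spdk_nvme_perf above drove 4 KiB random reads at queue depth 64 for 10 seconds from cores 4-7 (-c 0xF0), one TCP connection per core, and the nvmf_get_stats/jq check before the run confirmed those four qpairs landed one per poll group. A standalone sketch of that verification, assuming the stats JSON has been saved to stats.json:

count=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' stats.json | wc -l)
if [[ $count -ne 4 ]]; then
    echo "expected 1 active qpair per poll group, matched $count of 4" >&2
    exit 1
fi
]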
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:09.007 14:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.917 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.917 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:10.917 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:10.917 14:25:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:12.826 14:25:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:14.738 14:25:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
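[The teardown above is deliberately narrow: nvmfcleanup unloads the nvme-tcp/nvme-fabrics modules, killprocess stops the target, and the iptr helper restores every firewall rule except the ones the test tagged. A condensed sketch of that cleanup, assuming _remove_spdk_ns (whose output the trace redirects away) deletes the test namespace:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep all rules except SPDK_NVMF-tagged ones
ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # drop the initiator-side address
]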
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:20.022 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:20.022 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.022 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:20.023 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:20.023 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.023 14:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:20.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:27:20.023 00:27:20.023 --- 10.0.0.2 ping statistics --- 00:27:20.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.023 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:27:20.023 00:27:20.023 --- 10.0.0.1 ping statistics --- 00:27:20.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.023 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:20.023 net.core.busy_poll = 1 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:20.023 net.core.busy_read = 1 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:20.023 14:25:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:20.023 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3491860 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3491860 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3491860 ']' 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:20.285 14:25:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:20.285 [2024-11-25 14:25:25.241001] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:27:20.285 [2024-11-25 14:25:25.241067] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.285 [2024-11-25 14:25:25.344392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:20.546 [2024-11-25 14:25:25.398376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
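[adq_configure_driver above is the heart of the ADQ pass: hardware TC offload is switched on, busy polling is enabled system-wide, the port is split into two hardware channels (num_tc 2 map 0 1 defines two traffic classes; queues 2@0 2@2 assigns each class two queue pairs), and a flower filter pinned in hardware (skip_sw) steers NVMe/TCP traffic for 10.0.0.2:4420 into the second class. A condensed sketch, with the ip netns exec cvl_0_0_ns_spdk wrappers dropped for brevity; the set_xps_rxqs comment is an interpretation, not from the trace:

ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # align XPS so TX queue choice follows the RX queue pairing
]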
00:27:20.546 [2024-11-25 14:25:25.398433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.546 [2024-11-25 14:25:25.398442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.546 [2024-11-25 14:25:25.398449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.546 [2024-11-25 14:25:25.398455] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.546 [2024-11-25 14:25:25.400533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.546 [2024-11-25 14:25:25.400570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:20.546 [2024-11-25 14:25:25.400699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.546 [2024-11-25 14:25:25.400699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:21.117 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:21.117 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:21.117 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.118 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.394 14:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.394 [2024-11-25 14:25:26.264757] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.394 Malloc1 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:21.394 [2024-11-25 14:25:26.339252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3492171 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:21.394 14:25:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:23.511 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:23.511 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.511 14:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:23.511 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.511 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:23.511 "tick_rate": 2400000000, 00:27:23.511 "poll_groups": [ 00:27:23.511 { 00:27:23.511 "name": "nvmf_tgt_poll_group_000", 00:27:23.511 "admin_qpairs": 1, 00:27:23.511 "io_qpairs": 4, 00:27:23.511 "current_admin_qpairs": 1, 00:27:23.511 "current_io_qpairs": 4, 00:27:23.511 "pending_bdev_io": 0, 00:27:23.511 "completed_nvme_io": 34609, 00:27:23.511 "transports": [ 00:27:23.511 { 00:27:23.511 "trtype": "TCP" 00:27:23.511 } 00:27:23.511 ] 00:27:23.511 }, 00:27:23.511 { 00:27:23.511 "name": "nvmf_tgt_poll_group_001", 00:27:23.511 "admin_qpairs": 0, 00:27:23.511 "io_qpairs": 0, 00:27:23.511 "current_admin_qpairs": 0, 00:27:23.511 "current_io_qpairs": 0, 00:27:23.511 "pending_bdev_io": 0, 00:27:23.511 "completed_nvme_io": 0, 00:27:23.511 "transports": [ 00:27:23.511 { 00:27:23.511 "trtype": "TCP" 00:27:23.511 } 00:27:23.511 ] 00:27:23.511 }, 00:27:23.511 { 00:27:23.511 "name": "nvmf_tgt_poll_group_002", 00:27:23.511 "admin_qpairs": 0, 00:27:23.511 "io_qpairs": 0, 00:27:23.511 "current_admin_qpairs": 0, 00:27:23.511 "current_io_qpairs": 0, 00:27:23.511 "pending_bdev_io": 0, 00:27:23.511 "completed_nvme_io": 0, 00:27:23.511 "transports": [ 00:27:23.511 { 00:27:23.511 "trtype": "TCP" 00:27:23.511 } 00:27:23.511 ] 00:27:23.511 }, 00:27:23.511 { 00:27:23.511 "name": "nvmf_tgt_poll_group_003", 00:27:23.511 "admin_qpairs": 0, 00:27:23.511 "io_qpairs": 0, 00:27:23.511 "current_admin_qpairs": 0, 00:27:23.511 "current_io_qpairs": 0, 00:27:23.511 "pending_bdev_io": 0, 00:27:23.511 "completed_nvme_io": 0, 00:27:23.511 "transports": [ 00:27:23.511 { 00:27:23.512 "trtype": "TCP" 00:27:23.512 } 00:27:23.512 ] 00:27:23.512 } 00:27:23.512 ] 00:27:23.512 }' 00:27:23.512 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:23.512 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:23.512 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:27:23.512 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:27:23.512 14:25:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3492171 00:27:31.647 Initializing NVMe Controllers 00:27:31.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:31.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:31.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:31.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:31.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:31.647 Initialization complete. Launching workers. 
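The nvmf_get_stats dump above is what the ADQ check keys on: all four I/O qpairs from the 0xF0-masked initiator landed on nvmf_tgt_poll_group_000, leaving the other three groups idle. The test counts the idle groups and fails only if fewer than two poll groups stayed idle, i.e. if the connections scattered. A condensed sketch of that check (scripts/rpc.py standing in for the suite's rpc_cmd wrapper):

    # jq prints one line per poll group with zero active I/O qpairs; wc -l counts them.
    count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
    if [[ $count -lt 2 ]]; then
        echo "ADQ placement failed: I/O qpairs spread across poll groups"
        exit 1
    fi

The spdk_nvme_perf results that follow then report per-lcore IOPS and latency for those same four connections.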
00:27:31.647 ========================================================
00:27:31.647 Latency(us)
00:27:31.647 Device Information : IOPS MiB/s Average min max
00:27:31.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6267.70 24.48 10213.43 1402.05 58382.31
00:27:31.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6265.10 24.47 10242.45 1351.69 58347.51
00:27:31.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6116.20 23.89 10465.62 1189.42 59088.76
00:27:31.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5760.10 22.50 11113.27 1070.33 58976.14
00:27:31.647 ========================================================
00:27:31.647 Total : 24409.10 95.35 10496.41 1070.33 59088.76
00:27:31.647
00:27:31.647 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3491860 ']'
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3491860
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3491860 ']'
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3491860
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3491860
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3491860'
killing process with pid 3491860
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3491860
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3491860
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:31.907
14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.907 14:25:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:35.206 00:27:35.206 real 0m54.142s 00:27:35.206 user 2m50.032s 00:27:35.206 sys 0m11.036s 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:35.206 ************************************ 00:27:35.206 END TEST nvmf_perf_adq 00:27:35.206 ************************************ 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:35.206 ************************************ 00:27:35.206 START TEST nvmf_shutdown 00:27:35.206 ************************************ 00:27:35.206 14:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:35.206 * Looking for test storage... 
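Between tests, nvmftestfini undoes everything nvmftestinit set up; the teardown traced above reduces to roughly this sequence (the namespace name matches this run, and the netns deletion is an assumption about what _remove_spdk_ns does here):

    # Drop only the SPDK-tagged iptables rules, keeping everything else intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # Tear down the target-side namespace and flush the initiator address
    ip netns delete cvl_0_0_ns_spdk   # assumed body of _remove_spdk_ns for this run
    ip -4 addr flush cvl_0_1

The SPDK_NVMF comment attached when the ACCEPT rule was installed is what makes the grep -v filter safe. With the perf_adq run finished (real 0m54.142s), the harness moves on to the nvmf_shutdown suite.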
00:27:35.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.206 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:35.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.207 --rc genhtml_branch_coverage=1 00:27:35.207 --rc genhtml_function_coverage=1 00:27:35.207 --rc genhtml_legend=1 00:27:35.207 --rc geninfo_all_blocks=1 00:27:35.207 --rc geninfo_unexecuted_blocks=1 00:27:35.207 00:27:35.207 ' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.207 --rc genhtml_branch_coverage=1 00:27:35.207 --rc genhtml_function_coverage=1 00:27:35.207 --rc genhtml_legend=1 00:27:35.207 --rc geninfo_all_blocks=1 00:27:35.207 --rc geninfo_unexecuted_blocks=1 00:27:35.207 00:27:35.207 ' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.207 --rc genhtml_branch_coverage=1 00:27:35.207 --rc genhtml_function_coverage=1 00:27:35.207 --rc genhtml_legend=1 00:27:35.207 --rc geninfo_all_blocks=1 00:27:35.207 --rc geninfo_unexecuted_blocks=1 00:27:35.207 00:27:35.207 ' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.207 --rc genhtml_branch_coverage=1 00:27:35.207 --rc genhtml_function_coverage=1 00:27:35.207 --rc genhtml_legend=1 00:27:35.207 --rc geninfo_all_blocks=1 00:27:35.207 --rc geninfo_unexecuted_blocks=1 00:27:35.207 00:27:35.207 ' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
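The scripts/common.sh helpers traced above implement a plain field-by-field version compare; "lt 1.15 2" asks whether the installed lcov predates 2.0 so the matching coverage flags get exported. A compressed sketch of the same logic (the lt name is from the trace; the surrounding decimal/cmp_versions plumbing is collapsed here):

    # Return 0 (true) when version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov flags apply"

Here 1.15 splits into (1,15) and 2 into (2), so the first field decides, and the branch/function-coverage LCOV_OPTS seen in the trace are kept for pre-2.0 lcov.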
00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:35.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:35.207 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:35.207 14:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.208 ************************************ 00:27:35.208 START TEST nvmf_shutdown_tc1 00:27:35.208 ************************************ 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:35.208 14:25:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.346 14:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.346 14:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:43.346 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:43.346 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:43.346 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:43.346 14:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:43.346 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.346 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:43.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:27:43.347 00:27:43.347 --- 10.0.0.2 ping statistics --- 00:27:43.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.347 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:43.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:43.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:27:43.347 00:27:43.347 --- 10.0.0.1 ping statistics --- 00:27:43.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.347 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3498645 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3498645 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3498645 ']' 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
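nvmfappstart has just launched nvmf_tgt inside the namespace (note the -m 0x1E core mask and -e 0xFFFF trace mask on the command line above), and waitforlisten now polls the RPC socket until the target answers. Its rough shape, per the max_retries=100 visible in the trace (a sketch, not the exact source; $rootdir is the SPDK checkout):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 100; i != 0; i-- )); do
            # Stop waiting if the target died; otherwise probe the RPC socket.
            kill -0 "$pid" 2>/dev/null || return 1
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }

Only once this returns does the suite issue the nvmf_create_transport RPC seen below.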
00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.347 14:25:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.347 [2024-11-25 14:25:47.854828] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:27:43.347 [2024-11-25 14:25:47.854891] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.347 [2024-11-25 14:25:47.961096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.347 [2024-11-25 14:25:48.013015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.347 [2024-11-25 14:25:48.013069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.347 [2024-11-25 14:25:48.013077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.347 [2024-11-25 14:25:48.013084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.347 [2024-11-25 14:25:48.013091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:43.347 [2024-11-25 14:25:48.015142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.347 [2024-11-25 14:25:48.015283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.347 [2024-11-25 14:25:48.015494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:43.347 [2024-11-25 14:25:48.015508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.608 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:43.608 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:43.608 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:43.608 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:43.608 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.870 [2024-11-25 14:25:48.736151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:43.870 14:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.870 14:25:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.870 Malloc1 
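The Malloc1 line just printed and the Malloc2..Malloc10 lines that follow come from shutdown.sh@27-36 above: rather than issuing ten RPC round-trips per object, the loop appends one block per subsystem to rpcs.txt and replays the whole batch through a single rpc_cmd invocation. Roughly (serial-number format assumed; 64 and 512 are the suite's MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE):

    rm -rf rpcs.txt
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    rpc_cmd < rpcs.txt   # one batched replay, as shutdown.sh@36 does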
00:27:43.870 [2024-11-25 14:25:48.869420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.870 Malloc2 00:27:43.870 Malloc3 00:27:44.131 Malloc4 00:27:44.131 Malloc5 00:27:44.131 Malloc6 00:27:44.131 Malloc7 00:27:44.131 Malloc8 00:27:44.131 Malloc9 00:27:44.393 Malloc10 00:27:44.393 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.393 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3499029 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3499029 /var/tmp/bdevperf.sock 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3499029 ']' 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:44.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
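The bdev_svc helper started next consumes a JSON config generated by gen_nvmf_target_json 1 2 ... 10; the heredoc template traced below is instantiated once per subsystem. For subsystem 1 it expands to roughly this (values taken from this run's environment: TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420; the digest settings default to false):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

One such bdev_nvme_attach_controller entry per subsystem gives the shutdown test ten independent NVMe-oF controllers to exercise.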
00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 
00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 [2024-11-25 14:25:49.386591] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:27:44.394 [2024-11-25 14:25:49.386663] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.394 EOF 00:27:44.394 )") 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.394 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.394 { 00:27:44.394 "params": { 00:27:44.394 "name": "Nvme$subsystem", 00:27:44.394 "trtype": "$TEST_TRANSPORT", 00:27:44.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.394 "adrfam": "ipv4", 00:27:44.394 "trsvcid": "$NVMF_PORT", 00:27:44.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.394 "hdgst": ${hdgst:-false}, 00:27:44.394 "ddgst": ${ddgst:-false} 00:27:44.394 }, 00:27:44.394 "method": "bdev_nvme_attach_controller" 00:27:44.394 } 00:27:44.395 EOF 00:27:44.395 )") 00:27:44.395 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:44.395 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:44.395 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:44.395 { 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme$subsystem", 00:27:44.395 "trtype": "$TEST_TRANSPORT", 00:27:44.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "$NVMF_PORT", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.395 "hdgst": ${hdgst:-false}, 00:27:44.395 "ddgst": ${ddgst:-false} 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 } 00:27:44.395 EOF 00:27:44.395 )") 00:27:44.395 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# cat 00:27:44.395 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:27:44.395 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:44.395 14:25:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme1", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme2", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme3", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme4", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme5", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme6", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme7", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme8", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 
"trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme9", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 },{ 00:27:44.395 "params": { 00:27:44.395 "name": "Nvme10", 00:27:44.395 "trtype": "tcp", 00:27:44.395 "traddr": "10.0.0.2", 00:27:44.395 "adrfam": "ipv4", 00:27:44.395 "trsvcid": "4420", 00:27:44.395 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:44.395 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:44.395 "hdgst": false, 00:27:44.395 "ddgst": false 00:27:44.395 }, 00:27:44.395 "method": "bdev_nvme_attach_controller" 00:27:44.395 }' 00:27:44.657 [2024-11-25 14:25:49.481932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.657 [2024-11-25 14:25:49.535721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3499029 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:46.042 14:25:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:46.985 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3499029 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3498645 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 
00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.985 { 00:27:46.985 "params": { 00:27:46.985 "name": "Nvme$subsystem", 00:27:46.985 "trtype": "$TEST_TRANSPORT", 00:27:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.985 "adrfam": "ipv4", 00:27:46.985 "trsvcid": "$NVMF_PORT", 00:27:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.985 "hdgst": ${hdgst:-false}, 00:27:46.985 "ddgst": ${ddgst:-false} 00:27:46.985 }, 00:27:46.985 "method": "bdev_nvme_attach_controller" 00:27:46.985 } 00:27:46.985 EOF 00:27:46.985 )") 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.985 { 00:27:46.985 "params": { 00:27:46.985 "name": "Nvme$subsystem", 00:27:46.985 "trtype": "$TEST_TRANSPORT", 00:27:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.985 "adrfam": "ipv4", 00:27:46.985 "trsvcid": "$NVMF_PORT", 00:27:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.985 "hdgst": ${hdgst:-false}, 00:27:46.985 "ddgst": ${ddgst:-false} 00:27:46.985 }, 00:27:46.985 "method": "bdev_nvme_attach_controller" 00:27:46.985 } 00:27:46.985 EOF 00:27:46.985 )") 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.985 { 00:27:46.985 "params": { 00:27:46.985 "name": "Nvme$subsystem", 00:27:46.985 "trtype": "$TEST_TRANSPORT", 00:27:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.985 "adrfam": "ipv4", 00:27:46.985 "trsvcid": "$NVMF_PORT", 00:27:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.985 "hdgst": ${hdgst:-false}, 00:27:46.985 "ddgst": ${ddgst:-false} 00:27:46.985 }, 00:27:46.985 "method": "bdev_nvme_attach_controller" 00:27:46.985 } 00:27:46.985 EOF 00:27:46.985 )") 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.985 { 00:27:46.985 "params": { 00:27:46.985 "name": "Nvme$subsystem", 00:27:46.985 "trtype": "$TEST_TRANSPORT", 00:27:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.985 "adrfam": "ipv4", 00:27:46.985 "trsvcid": "$NVMF_PORT", 00:27:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.985 "hdgst": ${hdgst:-false}, 00:27:46.985 "ddgst": ${ddgst:-false} 00:27:46.985 }, 00:27:46.985 "method": 
"bdev_nvme_attach_controller" 00:27:46.985 } 00:27:46.985 EOF 00:27:46.985 )") 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.985 { 00:27:46.985 "params": { 00:27:46.985 "name": "Nvme$subsystem", 00:27:46.985 "trtype": "$TEST_TRANSPORT", 00:27:46.985 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.985 "adrfam": "ipv4", 00:27:46.985 "trsvcid": "$NVMF_PORT", 00:27:46.985 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.985 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.985 "hdgst": ${hdgst:-false}, 00:27:46.985 "ddgst": ${ddgst:-false} 00:27:46.985 }, 00:27:46.985 "method": "bdev_nvme_attach_controller" 00:27:46.985 } 00:27:46.985 EOF 00:27:46.985 )") 00:27:46.985 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.986 { 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme$subsystem", 00:27:46.986 "trtype": "$TEST_TRANSPORT", 00:27:46.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "$NVMF_PORT", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.986 "hdgst": ${hdgst:-false}, 00:27:46.986 "ddgst": ${ddgst:-false} 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 } 00:27:46.986 EOF 00:27:46.986 )") 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.986 [2024-11-25 14:25:51.858210] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:27:46.986 [2024-11-25 14:25:51.858264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499404 ] 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.986 { 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme$subsystem", 00:27:46.986 "trtype": "$TEST_TRANSPORT", 00:27:46.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "$NVMF_PORT", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.986 "hdgst": ${hdgst:-false}, 00:27:46.986 "ddgst": ${ddgst:-false} 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 } 00:27:46.986 EOF 00:27:46.986 )") 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.986 { 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme$subsystem", 00:27:46.986 "trtype": "$TEST_TRANSPORT", 00:27:46.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "$NVMF_PORT", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.986 "hdgst": ${hdgst:-false}, 00:27:46.986 "ddgst": ${ddgst:-false} 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 } 00:27:46.986 EOF 00:27:46.986 )") 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.986 { 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme$subsystem", 00:27:46.986 "trtype": "$TEST_TRANSPORT", 00:27:46.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "$NVMF_PORT", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.986 "hdgst": ${hdgst:-false}, 00:27:46.986 "ddgst": ${ddgst:-false} 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 } 00:27:46.986 EOF 00:27:46.986 )") 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.986 { 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme$subsystem", 00:27:46.986 "trtype": "$TEST_TRANSPORT", 00:27:46.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "$NVMF_PORT", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.986 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.986 "hdgst": ${hdgst:-false}, 00:27:46.986 "ddgst": ${ddgst:-false} 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 } 00:27:46.986 EOF 00:27:46.986 )") 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:46.986 14:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme1", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme2", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme3", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme4", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme5", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme6", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme7", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:46.986 
"hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme8", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme9", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.986 "trsvcid": "4420", 00:27:46.986 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:46.986 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:46.986 "hdgst": false, 00:27:46.986 "ddgst": false 00:27:46.986 }, 00:27:46.986 "method": "bdev_nvme_attach_controller" 00:27:46.986 },{ 00:27:46.986 "params": { 00:27:46.986 "name": "Nvme10", 00:27:46.986 "trtype": "tcp", 00:27:46.986 "traddr": "10.0.0.2", 00:27:46.986 "adrfam": "ipv4", 00:27:46.987 "trsvcid": "4420", 00:27:46.987 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:46.987 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:46.987 "hdgst": false, 00:27:46.987 "ddgst": false 00:27:46.987 }, 00:27:46.987 "method": "bdev_nvme_attach_controller" 00:27:46.987 }' 00:27:46.987 [2024-11-25 14:25:51.947543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.987 [2024-11-25 14:25:51.983523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.370 Running I/O for 1 seconds... 00:27:49.312 1863.00 IOPS, 116.44 MiB/s 00:27:49.312 Latency(us) 00:27:49.312 [2024-11-25T13:25:54.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.313 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme1n1 : 1.15 222.91 13.93 0.00 0.00 284125.87 16820.91 251658.24 00:27:49.313 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme2n1 : 1.14 224.58 14.04 0.00 0.00 277255.04 20097.71 249910.61 00:27:49.313 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme3n1 : 1.13 226.90 14.18 0.00 0.00 269183.36 16820.91 253405.87 00:27:49.313 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme4n1 : 1.15 226.01 14.13 0.00 0.00 264107.14 8519.68 255153.49 00:27:49.313 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme5n1 : 1.13 235.70 14.73 0.00 0.00 243748.82 6198.61 225443.84 00:27:49.313 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme6n1 : 1.10 232.75 14.55 0.00 0.00 247749.55 20097.71 249910.61 00:27:49.313 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme7n1 : 1.17 272.53 17.03 0.00 0.00 205552.38 12342.61 270882.13 00:27:49.313 Job: Nvme8n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme8n1 : 1.19 269.45 16.84 0.00 0.00 208046.08 15510.19 244667.73 00:27:49.313 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme9n1 : 1.18 220.91 13.81 0.00 0.00 248465.75 907.95 272629.76 00:27:49.313 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:49.313 Verification LBA range: start 0x0 length 0x400 00:27:49.313 Nvme10n1 : 1.20 267.43 16.71 0.00 0.00 202180.01 7700.48 255153.49 00:27:49.313 [2024-11-25T13:25:54.403Z] =================================================================================================================== 00:27:49.313 [2024-11-25T13:25:54.403Z] Total : 2399.16 149.95 0.00 0.00 242314.50 907.95 272629.76 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:49.574 rmmod nvme_tcp 00:27:49.574 rmmod nvme_fabrics 00:27:49.574 rmmod nvme_keyring 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3498645 ']' 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3498645 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3498645 ']' 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3498645 00:27:49.574 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:27:49.575 14:25:54 
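A quick sanity check on the summary line above: bdevperf ran with -q 64 -o 65536 -w verify -t 1, so the reported bandwidth is simply IOPS times the 64 KiB I/O size:

# 1863.00 IOPS x 65536 bytes per I/O = 116.4375 MiB/s, matching the
# reported 116.44 MiB/s (rounded)
echo 'scale=4; 1863.00 * 65536 / (1024 * 1024)' | bc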
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.575 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3498645 00:27:49.575 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:49.575 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:49.575 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3498645' 00:27:49.575 killing process with pid 3498645 00:27:49.575 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3498645 00:27:49.575 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3498645 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.836 14:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.383 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:52.383 00:27:52.383 real 0m16.769s 00:27:52.383 user 0m33.358s 00:27:52.383 sys 0m6.949s 00:27:52.383 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:52.383 14:25:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 ************************************ 00:27:52.383 END TEST nvmf_shutdown_tc1 00:27:52.383 ************************************ 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:52.383 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:52.383 ************************************ 00:27:52.383 START TEST nvmf_shutdown_tc2 00:27:52.383 ************************************ 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:52.383 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:52.384 14:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 
- 0x159b)' 00:27:52.384 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:52.384 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:52.384 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:52.384 
14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:52.384 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:52.384 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- 
# ip -4 addr flush cvl_0_1 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:52.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:27:52.385 00:27:52.385 --- 10.0.0.2 ping statistics --- 00:27:52.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.385 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:52.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:27:52.385 00:27:52.385 --- 10.0.0.1 ping statistics --- 00:27:52.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.385 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3500630 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3500630 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3500630 ']' 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
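The interface plumbing traced at nvmf/common.sh@250-@291 above builds a two-port TCP topology on one host: the target-side port is moved into a private network namespace so the initiator reaches it over a real network path rather than loopback. Condensed into plain commands (a sketch using the cvl_0_0/cvl_0_1 interface names from this run; nvmf/common.sh remains the source of truth):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace

With both pings answering, nvmf_tgt is launched inside the namespace (common.sh@508) so target and initiator-side tools exercise a genuine TCP path across the two physical ports.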
00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.385 14:25:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.646 [2024-11-25 14:25:57.502789] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:27:52.646 [2024-11-25 14:25:57.502887] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.646 [2024-11-25 14:25:57.602128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.646 [2024-11-25 14:25:57.641221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.646 [2024-11-25 14:25:57.641260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.646 [2024-11-25 14:25:57.641267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.646 [2024-11-25 14:25:57.641272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.646 [2024-11-25 14:25:57.641276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:52.646 [2024-11-25 14:25:57.642739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.646 [2024-11-25 14:25:57.642897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.646 [2024-11-25 14:25:57.643051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.646 [2024-11-25 14:25:57.643053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:53.218 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:53.218 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:53.218 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:53.218 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.218 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.479 [2024-11-25 14:25:58.351534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:53.479 14:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.479 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.479 Malloc1 
00:27:53.479 [2024-11-25 14:25:58.463057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:53.479 Malloc2 00:27:53.479 Malloc3 00:27:53.479 Malloc4 00:27:53.740 Malloc5 00:27:53.740 Malloc6 00:27:53.740 Malloc7 00:27:53.740 Malloc8 00:27:53.740 Malloc9 00:27:53.740 Malloc10 00:27:53.740 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.740 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:53.740 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.740 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.003 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3500902 00:27:54.003 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3500902 /var/tmp/bdevperf.sock 00:27:54.003 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3500902 ']' 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:54.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
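The shutdown.sh@28/@29 loop above cats one RPC snippet per subsystem into rpcs.txt (shutdown.sh@27 wipes the file first), and shutdown.sh@36 replays the batch through rpc_cmd; the Malloc1-Malloc10 lines are the bdev-creation replies. The snippet bodies are not echoed in this trace, so the following is only a plausible reconstruction of one iteration, consistent with the bdev names, the nqn.2016-06.io.spdk:cnode$i subsystems used later, and the 10.0.0.2:4420 listener notice; the malloc size/block-size and serial-number arguments are illustrative, not taken from this log:

for i in {1..10}; do
        cat <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done >> rpcs.txt
rpc_cmd < rpcs.txt      # shutdown.sh@36: one RPC session replays the whole batch

Batching ten subsystems through a single rpc_cmd invocation avoids paying the RPC connection setup cost forty separate times.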
00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.004 { 00:27:54.004 "params": { 00:27:54.004 "name": "Nvme$subsystem", 00:27:54.004 "trtype": "$TEST_TRANSPORT", 00:27:54.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.004 "adrfam": "ipv4", 00:27:54.004 "trsvcid": "$NVMF_PORT", 00:27:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.004 "hdgst": ${hdgst:-false}, 00:27:54.004 "ddgst": ${ddgst:-false} 00:27:54.004 }, 00:27:54.004 "method": "bdev_nvme_attach_controller" 00:27:54.004 } 00:27:54.004 EOF 00:27:54.004 )") 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.004 { 00:27:54.004 "params": { 00:27:54.004 "name": "Nvme$subsystem", 00:27:54.004 "trtype": "$TEST_TRANSPORT", 00:27:54.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.004 "adrfam": "ipv4", 00:27:54.004 "trsvcid": "$NVMF_PORT", 00:27:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.004 "hdgst": ${hdgst:-false}, 00:27:54.004 "ddgst": ${ddgst:-false} 00:27:54.004 }, 00:27:54.004 "method": "bdev_nvme_attach_controller" 00:27:54.004 } 00:27:54.004 EOF 00:27:54.004 )") 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.004 { 00:27:54.004 "params": { 00:27:54.004 "name": "Nvme$subsystem", 00:27:54.004 "trtype": "$TEST_TRANSPORT", 00:27:54.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.004 "adrfam": "ipv4", 00:27:54.004 "trsvcid": "$NVMF_PORT", 00:27:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.004 "hdgst": ${hdgst:-false}, 00:27:54.004 "ddgst": ${ddgst:-false} 00:27:54.004 }, 00:27:54.004 "method": 
"bdev_nvme_attach_controller" 00:27:54.004 } 00:27:54.004 EOF 00:27:54.004 )") 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.004 { 00:27:54.004 "params": { 00:27:54.004 "name": "Nvme$subsystem", 00:27:54.004 "trtype": "$TEST_TRANSPORT", 00:27:54.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.004 "adrfam": "ipv4", 00:27:54.004 "trsvcid": "$NVMF_PORT", 00:27:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.004 "hdgst": ${hdgst:-false}, 00:27:54.004 "ddgst": ${ddgst:-false} 00:27:54.004 }, 00:27:54.004 "method": "bdev_nvme_attach_controller" 00:27:54.004 } 00:27:54.004 EOF 00:27:54.004 )") 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.004 { 00:27:54.004 "params": { 00:27:54.004 "name": "Nvme$subsystem", 00:27:54.004 "trtype": "$TEST_TRANSPORT", 00:27:54.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.004 "adrfam": "ipv4", 00:27:54.004 "trsvcid": "$NVMF_PORT", 00:27:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.004 "hdgst": ${hdgst:-false}, 00:27:54.004 "ddgst": ${ddgst:-false} 00:27:54.004 }, 00:27:54.004 "method": "bdev_nvme_attach_controller" 00:27:54.004 } 00:27:54.004 EOF 00:27:54.004 )") 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.004 { 00:27:54.004 "params": { 00:27:54.004 "name": "Nvme$subsystem", 00:27:54.004 "trtype": "$TEST_TRANSPORT", 00:27:54.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.004 "adrfam": "ipv4", 00:27:54.004 "trsvcid": "$NVMF_PORT", 00:27:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.004 "hdgst": ${hdgst:-false}, 00:27:54.004 "ddgst": ${ddgst:-false} 00:27:54.004 }, 00:27:54.004 "method": "bdev_nvme_attach_controller" 00:27:54.004 } 00:27:54.004 EOF 00:27:54.004 )") 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.004 [2024-11-25 14:25:58.905376] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:27:54.004 [2024-11-25 14:25:58.905431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500902 ] 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.004 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.004 { 00:27:54.004 "params": { 00:27:54.004 "name": "Nvme$subsystem", 00:27:54.004 "trtype": "$TEST_TRANSPORT", 00:27:54.004 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.004 "adrfam": "ipv4", 00:27:54.004 "trsvcid": "$NVMF_PORT", 00:27:54.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.004 "hdgst": ${hdgst:-false}, 00:27:54.004 "ddgst": ${ddgst:-false} 00:27:54.004 }, 00:27:54.004 "method": "bdev_nvme_attach_controller" 00:27:54.004 } 00:27:54.004 EOF 00:27:54.004 )") 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.005 { 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme$subsystem", 00:27:54.005 "trtype": "$TEST_TRANSPORT", 00:27:54.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "$NVMF_PORT", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.005 "hdgst": ${hdgst:-false}, 00:27:54.005 "ddgst": ${ddgst:-false} 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 } 00:27:54.005 EOF 00:27:54.005 )") 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.005 { 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme$subsystem", 00:27:54.005 "trtype": "$TEST_TRANSPORT", 00:27:54.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "$NVMF_PORT", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.005 "hdgst": ${hdgst:-false}, 00:27:54.005 "ddgst": ${ddgst:-false} 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 } 00:27:54.005 EOF 00:27:54.005 )") 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:54.005 { 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme$subsystem", 00:27:54.005 "trtype": "$TEST_TRANSPORT", 00:27:54.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:54.005 
"adrfam": "ipv4", 00:27:54.005 "trsvcid": "$NVMF_PORT", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:54.005 "hdgst": ${hdgst:-false}, 00:27:54.005 "ddgst": ${ddgst:-false} 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 } 00:27:54.005 EOF 00:27:54.005 )") 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:27:54.005 14:25:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme1", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme2", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme3", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme4", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme5", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme6", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme7", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 
00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme8", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme9", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 },{ 00:27:54.005 "params": { 00:27:54.005 "name": "Nvme10", 00:27:54.005 "trtype": "tcp", 00:27:54.005 "traddr": "10.0.0.2", 00:27:54.005 "adrfam": "ipv4", 00:27:54.005 "trsvcid": "4420", 00:27:54.005 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:54.005 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:54.005 "hdgst": false, 00:27:54.005 "ddgst": false 00:27:54.005 }, 00:27:54.005 "method": "bdev_nvme_attach_controller" 00:27:54.005 }' 00:27:54.005 [2024-11-25 14:25:58.994725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.005 [2024-11-25 14:25:59.031200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.390 Running I/O for 10 seconds... 
00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.390 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.651 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.651 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:55.651 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:55.651 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.912 14:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:27:55.912 14:26:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3500902 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3500902 ']' 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3500902 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3500902 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3500902' 00:27:56.172 killing process with pid 3500902 00:27:56.172 14:26:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3500902 00:27:56.172 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3500902
00:27:56.434 2302.00 IOPS, 143.88 MiB/s [2024-11-25T13:26:01.524Z] Received shutdown signal, test time was about 1.017798 seconds
00:27:56.434
00:27:56.434 Latency(us)
00:27:56.434 [2024-11-25T13:26:01.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.434 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme1n1 : 0.94 204.44 12.78 0.00 0.00 309461.33 20206.93 249910.61
00:27:56.434 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme2n1 : 1.02 251.74 15.73 0.00 0.00 234980.69 19005.44 249910.61
00:27:56.434 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme3n1 : 0.97 262.83 16.43 0.00 0.00 230925.87 21517.65 246415.36
00:27:56.434 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme4n1 : 0.97 263.59 16.47 0.00 0.00 225440.85 19770.03 227191.47
00:27:56.434 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme5n1 : 0.97 265.10 16.57 0.00 0.00 219243.52 22173.01 244667.73
00:27:56.434 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme6n1 : 0.95 203.09 12.69 0.00 0.00 278761.81 15400.96 253405.87
00:27:56.434 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme7n1 : 0.96 265.90 16.62 0.00 0.00 208824.11 17585.49 248162.99
00:27:56.434 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme8n1 : 0.98 262.07 16.38 0.00 0.00 207511.89 17257.81 242920.11
00:27:56.434 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme9n1 : 0.95 201.26 12.58 0.00 0.00 262551.32 16602.45 255153.49
00:27:56.434 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.434 Verification LBA range: start 0x0 length 0x400
00:27:56.434 Nvme10n1 : 0.96 200.22 12.51 0.00 0.00 258023.25 19333.12 270882.13
[2024-11-25T13:26:01.524Z] ===================================================================================================================
[2024-11-25T13:26:01.524Z] Total : 2380.23 148.76 0.00 0.00 239836.14 15400.96 270882.13
00:27:56.434 14:26:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3500630 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f
./local-job0-0-verify.state 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:57.376 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:57.376 rmmod nvme_tcp 00:27:57.638 rmmod nvme_fabrics 00:27:57.638 rmmod nvme_keyring 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3500630 ']' 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3500630 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3500630 ']' 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3500630 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3500630 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3500630' 00:27:57.638 killing process with pid 3500630 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3500630 00:27:57.638 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3500630 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.899 14:26:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.815 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:59.815 00:27:59.815 real 0m7.827s 00:27:59.815 user 0m23.386s 00:27:59.815 sys 0m1.313s 00:27:59.815 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:59.815 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.815 ************************************ 00:27:59.815 END TEST nvmf_shutdown_tc2 00:27:59.815 ************************************ 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:00.077 ************************************ 00:28:00.077 START TEST nvmf_shutdown_tc3 00:28:00.077 ************************************ 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.077 14:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.077 14:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:00.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:00.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.077 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:00.078 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:00.078 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
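tc3's nvmftestinit repeats the same sysfs walk tc2 did: enumerate the e810 PCI functions found by the ID scan, glob each function's net/ directory, and keep only interfaces whose operstate is up. The core of the loop, reconstructed from the @410-@429 trace lines above (simplified; the real code also handles unbound devices and RDMA transports):

for pci in "${pci_devs[@]}"; do                            # @410: e810 functions from the ID scan
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # @411: netdev dir(s) bound to this function
        for net_dev in "${!pci_net_devs[@]}"; do           # @417/@418: drop interfaces that are not up
                [[ $(< "${pci_net_devs[net_dev]}/operstate") == up ]] || unset "pci_net_devs[net_dev]"
        done
        pci_net_devs=("${pci_net_devs[@]##*/}")            # @427: strip the sysfs path, keep the ifname
        net_devs+=("${pci_net_devs[@]}")                   # @429: here cvl_0_0, then cvl_0_1
done

The @432 count check that follows is what fails the run early if the scan turned up no usable interfaces.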
00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.078 14:26:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:00.078 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:00.078 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.078 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.078 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.078 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.078 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:00.078 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:00.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:28:00.339 00:28:00.339 --- 10.0.0.2 ping statistics --- 00:28:00.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.339 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:28:00.339 00:28:00.339 --- 10.0.0.1 ping statistics --- 00:28:00.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.339 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3502356 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3502356 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:00.339 14:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3502356 ']' 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.339 14:26:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.339 [2024-11-25 14:26:05.390523] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:28:00.339 [2024-11-25 14:26:05.390600] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.599 [2024-11-25 14:26:05.483682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.599 [2024-11-25 14:26:05.515625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.600 [2024-11-25 14:26:05.515658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.600 [2024-11-25 14:26:05.515664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.600 [2024-11-25 14:26:05.515669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.600 [2024-11-25 14:26:05.515674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
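nvmf_tcp_init above splits the two cvl ports across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, and connectivity is ping-verified in both directions. nvmf_tgt is then started with core mask 0x1E (binary 11110), which is exactly why the four reactors below come up on cores 1-4. A condensed replay of the traced setup commands (root required; a sketch of this run's steps, not the full helper):

#!/usr/bin/env bash
# Replay of the nvmf_tcp_init steps traced above; interface names from this run.
set -euo pipefail
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # moved into the namespace, will serve 10.0.0.2:4420
INI_IF=cvl_0_1        # stays in the root namespace as the initiator side
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # initiator -> target, as in the trace
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator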
00:28:00.600 [2024-11-25 14:26:05.516975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.600 [2024-11-25 14:26:05.517131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.600 [2024-11-25 14:26:05.517246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:00.600 [2024-11-25 14:26:05.517400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:01.170 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.171 [2024-11-25 14:26:06.231140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.171 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.432 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.432 Malloc1 00:28:01.432 [2024-11-25 14:26:06.337362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:01.432 Malloc2 00:28:01.432 Malloc3 00:28:01.432 Malloc4 00:28:01.432 Malloc5 00:28:01.432 Malloc6 00:28:01.693 Malloc7 00:28:01.693 Malloc8 00:28:01.693 Malloc9 00:28:01.693 Malloc10 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3502738 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3502738 /var/tmp/bdevperf.sock 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3502738 ']' 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:01.693 14:26:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:01.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.693 { 00:28:01.693 "params": { 00:28:01.693 "name": "Nvme$subsystem", 00:28:01.693 "trtype": "$TEST_TRANSPORT", 00:28:01.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.693 "adrfam": "ipv4", 00:28:01.693 "trsvcid": "$NVMF_PORT", 00:28:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.693 "hdgst": ${hdgst:-false}, 00:28:01.693 "ddgst": ${ddgst:-false} 00:28:01.693 }, 00:28:01.693 "method": "bdev_nvme_attach_controller" 00:28:01.693 } 00:28:01.693 EOF 00:28:01.693 )") 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.693 { 00:28:01.693 "params": { 00:28:01.693 "name": "Nvme$subsystem", 00:28:01.693 "trtype": "$TEST_TRANSPORT", 00:28:01.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.693 "adrfam": "ipv4", 00:28:01.693 "trsvcid": "$NVMF_PORT", 00:28:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.693 "hdgst": ${hdgst:-false}, 00:28:01.693 "ddgst": ${ddgst:-false} 00:28:01.693 }, 00:28:01.693 "method": "bdev_nvme_attach_controller" 00:28:01.693 } 00:28:01.693 EOF 00:28:01.693 )") 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.693 { 00:28:01.693 "params": { 00:28:01.693 
"name": "Nvme$subsystem", 00:28:01.693 "trtype": "$TEST_TRANSPORT", 00:28:01.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.693 "adrfam": "ipv4", 00:28:01.693 "trsvcid": "$NVMF_PORT", 00:28:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.693 "hdgst": ${hdgst:-false}, 00:28:01.693 "ddgst": ${ddgst:-false} 00:28:01.693 }, 00:28:01.693 "method": "bdev_nvme_attach_controller" 00:28:01.693 } 00:28:01.693 EOF 00:28:01.693 )") 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.693 { 00:28:01.693 "params": { 00:28:01.693 "name": "Nvme$subsystem", 00:28:01.693 "trtype": "$TEST_TRANSPORT", 00:28:01.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.693 "adrfam": "ipv4", 00:28:01.693 "trsvcid": "$NVMF_PORT", 00:28:01.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.693 "hdgst": ${hdgst:-false}, 00:28:01.693 "ddgst": ${ddgst:-false} 00:28:01.693 }, 00:28:01.693 "method": "bdev_nvme_attach_controller" 00:28:01.693 } 00:28:01.693 EOF 00:28:01.693 )") 00:28:01.693 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.694 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.694 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.694 { 00:28:01.694 "params": { 00:28:01.694 "name": "Nvme$subsystem", 00:28:01.694 "trtype": "$TEST_TRANSPORT", 00:28:01.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.694 "adrfam": "ipv4", 00:28:01.694 "trsvcid": "$NVMF_PORT", 00:28:01.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.694 "hdgst": ${hdgst:-false}, 00:28:01.694 "ddgst": ${ddgst:-false} 00:28:01.694 }, 00:28:01.694 "method": "bdev_nvme_attach_controller" 00:28:01.694 } 00:28:01.694 EOF 00:28:01.694 )") 00:28:01.694 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.694 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.694 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.694 { 00:28:01.694 "params": { 00:28:01.694 "name": "Nvme$subsystem", 00:28:01.694 "trtype": "$TEST_TRANSPORT", 00:28:01.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.694 "adrfam": "ipv4", 00:28:01.694 "trsvcid": "$NVMF_PORT", 00:28:01.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.694 "hdgst": ${hdgst:-false}, 00:28:01.694 "ddgst": ${ddgst:-false} 00:28:01.694 }, 00:28:01.694 "method": "bdev_nvme_attach_controller" 00:28:01.694 } 00:28:01.694 EOF 00:28:01.694 )") 00:28:01.694 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.954 [2024-11-25 14:26:06.786048] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:28:01.954 [2024-11-25 14:26:06.786103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502738 ] 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.954 { 00:28:01.954 "params": { 00:28:01.954 "name": "Nvme$subsystem", 00:28:01.954 "trtype": "$TEST_TRANSPORT", 00:28:01.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.954 "adrfam": "ipv4", 00:28:01.954 "trsvcid": "$NVMF_PORT", 00:28:01.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.954 "hdgst": ${hdgst:-false}, 00:28:01.954 "ddgst": ${ddgst:-false} 00:28:01.954 }, 00:28:01.954 "method": "bdev_nvme_attach_controller" 00:28:01.954 } 00:28:01.954 EOF 00:28:01.954 )") 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.954 { 00:28:01.954 "params": { 00:28:01.954 "name": "Nvme$subsystem", 00:28:01.954 "trtype": "$TEST_TRANSPORT", 00:28:01.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.954 "adrfam": "ipv4", 00:28:01.954 "trsvcid": "$NVMF_PORT", 00:28:01.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.954 "hdgst": ${hdgst:-false}, 00:28:01.954 "ddgst": ${ddgst:-false} 00:28:01.954 }, 00:28:01.954 "method": "bdev_nvme_attach_controller" 00:28:01.954 } 00:28:01.954 EOF 00:28:01.954 )") 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.954 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.954 { 00:28:01.954 "params": { 00:28:01.954 "name": "Nvme$subsystem", 00:28:01.954 "trtype": "$TEST_TRANSPORT", 00:28:01.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.954 "adrfam": "ipv4", 00:28:01.954 "trsvcid": "$NVMF_PORT", 00:28:01.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.954 "hdgst": ${hdgst:-false}, 00:28:01.954 "ddgst": ${ddgst:-false} 00:28:01.954 }, 00:28:01.954 "method": "bdev_nvme_attach_controller" 00:28:01.954 } 00:28:01.954 EOF 00:28:01.954 )") 00:28:01.955 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.955 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:01.955 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:01.955 { 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme$subsystem", 00:28:01.955 "trtype": "$TEST_TRANSPORT", 00:28:01.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:01.955 
"adrfam": "ipv4", 00:28:01.955 "trsvcid": "$NVMF_PORT", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:01.955 "hdgst": ${hdgst:-false}, 00:28:01.955 "ddgst": ${ddgst:-false} 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 } 00:28:01.955 EOF 00:28:01.955 )") 00:28:01.955 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:01.955 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:28:01.955 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:01.955 14:26:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme1", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme2", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme3", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme4", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme5", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme6", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme7", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 
00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme8", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme9", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 },{ 00:28:01.955 "params": { 00:28:01.955 "name": "Nvme10", 00:28:01.955 "trtype": "tcp", 00:28:01.955 "traddr": "10.0.0.2", 00:28:01.955 "adrfam": "ipv4", 00:28:01.955 "trsvcid": "4420", 00:28:01.955 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:01.955 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:01.955 "hdgst": false, 00:28:01.955 "ddgst": false 00:28:01.955 }, 00:28:01.955 "method": "bdev_nvme_attach_controller" 00:28:01.955 }' 00:28:01.955 [2024-11-25 14:26:06.876245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.955 [2024-11-25 14:26:06.912765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.338 Running I/O for 10 seconds... 
00:28:03.338 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.338 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:03.338 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:03.338 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.338 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:03.598 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:28:03.599 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:03.859 14:26:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:04.119 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:04.119 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:04.119 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:04.119 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:04.119 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.119 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=146 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 146 -ge 100 ']' 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3502356 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3502356 ']' 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3502356 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502356 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:04.394 14:26:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502356' 00:28:04.394 killing process with pid 3502356 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3502356 00:28:04.394 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3502356 00:28:04.394 [2024-11-25 14:26:09.289672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f7980 is same with the state(6) to be set
[the identical tcp.c:1773 recv-state *ERROR* line repeats back-to-back from 14:26:09.289672 through 14:26:09.292969, cycling through tqpair=0x6f7980, 0x7259b0, 0x6f7e50, and 0x6f8810; the verbatim duplicate lines are elided here]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.292974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.292978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.292983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.292988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.292992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.292997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.293001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.293006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.293011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.293016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.293020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.293026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.396 [2024-11-25 14:26:09.293030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.293035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.293039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.293044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.293048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.293052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f8810 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.294579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9680 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.294606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9680 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.294613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9680 is same with the state(6) to be set 
00:28:04.397 [2024-11-25 14:26:09.294618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9680 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is 
same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295302] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6f9b70 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295808] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.397 [2024-11-25 14:26:09.295867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 
00:28:04.398 [2024-11-25 14:26:09.295919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.295996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.296000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.296005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.296010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.296015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is same with the state(6) to be set 00:28:04.398 [2024-11-25 14:26:09.296019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7254e0 is 
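The flood of tcp.c:1773 messages above is the target-side nvmf_tcp_qpair_set_recv_state() rejecting a redundant transition: something keeps requesting the PDU receive state the qpair already holds (state 6), likely during qpair teardown in this test, and every rejected attempt logs one line, which is why the same message repeats per tqpair address. A minimal self-contained C analog of that guard pattern; the enum values, struct layout, and helper name below are illustrative assumptions, not SPDK's actual definitions:

    #include <stdio.h>

    /* Illustrative stand-ins for SPDK's PDU recv-state machinery. */
    enum pdu_recv_state { RECV_READY = 0, RECV_QUIESCING = 5, RECV_ERROR = 6 };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    /* Mirrors the guard that produces the repeated *ERROR* line above:
     * setting the state a qpair already holds is rejected and logged. */
    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_READY };

        set_recv_state(&q, RECV_ERROR); /* real transition: applied silently */
        set_recv_state(&q, RECV_ERROR); /* redundant: logs one line, as above */
        return 0;
    }

Compiled and run, the second call prints one line in the same shape as the log; in the real target the guard fires once per redundant request, hence the long runs per tqpair.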
00:28:04.398 [2024-11-25 14:26:09.309388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:04.398 [2024-11-25 14:26:09.309433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.398 [2024-11-25 14:26:09.309445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:04.398 [2024-11-25 14:26:09.309454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.398 [2024-11-25 14:26:09.309462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:04.398 [2024-11-25 14:26:09.309470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.398 [2024-11-25 14:26:09.309483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:04.398 [2024-11-25 14:26:09.309491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.398 [2024-11-25 14:26:09.309499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2fd0 is same with the state(6) to be set
[identical cycle of four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs, each ending in the nvme_tcp.c:326 recv-state error, repeated for tqpair=0x13cfdb0, 0x1396790, 0xf71d30, 0xf68990, 0xf67920, 0x13ebc90, 0xe89610, 0xf70040 and 0x13c3ce0, 14:26:09.309535-09.310353]
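Each admin queue pair drains the same way: its four outstanding ASYNC EVENT REQUEST commands (admin opcode 0x0c) complete with status (00/08), which in NVMe terms is Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion), the expected completion when the submission queue is deleted during controller teardown. A small throwaway decoder for the (sct/sc) notation that spdk_nvme_print_completion() prints; the helper below is ours, not an SPDK API:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the "(SCT/SC)" pair from the completion lines above,
     * e.g. "(00/08)" -> Generic Command Status / aborted by SQ deletion.
     * Only the codes seen in this log are mapped. */
    static const char *decode_status(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x0) {                 /* Generic Command Status */
            switch (sc) {
            case 0x00: return "SUCCESS";
            case 0x08: return "ABORTED - SQ DELETION";
            default:   return "generic status (unmapped here)";
            }
        }
        return "non-generic status (unmapped here)";
    }

    int main(void)
    {
        /* Every completion above reports sct=0x00, sc=0x08. */
        printf("(00/08) => %s\n", decode_status(0x00, 0x08));
        return 0;
    }

The trailing p:0 m:0 dnr:0 fields on the same lines are the completion's phase tag, More, and Do Not Retry bits, all clear here.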
00:28:04.399 [2024-11-25 14:26:09.331878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.399 [2024-11-25 14:26:09.331914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
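The I/O queue (sqid:1) drains next, above and in the collapsed records below: 48 outstanding sequential WRITEs, cid:0 through cid:47, each len:128 blocks and starting 128 blocks past the previous one, all completed with ABORTED - SQ DELETION. The LBA progression is a straight ramp from 24576 to 30592, which a throwaway check (our code, not SPDK's) confirms:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The aborted I/O pattern in this log is a sequential-write ramp:
         * command identifier n targets lba 24576 + 128*n with len:128,
         * so cid:0 -> 24576, cid:1 -> 24704, ..., cid:47 -> 30592. */
        const uint64_t base_lba = 24576;
        const uint32_t len_blocks = 128;

        for (uint32_t cid = 0; cid <= 47; cid++) {
            uint64_t lba = base_lba + (uint64_t)cid * len_blocks;
            printf("WRITE sqid:1 cid:%u lba:%llu len:%u\n",
                   (unsigned)cid, (unsigned long long)lba, (unsigned)len_blocks);
        }
        assert(base_lba + 47ULL * len_blocks == 30592); /* matches the last record */
        return 0;
    }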
[matching WRITE / ABORTED - SQ DELETION pairs repeated for cid:1 through cid:46, lba advancing from 24704 to 30464 in steps of 128, 14:26:09.331933-09.332731]
00:28:04.401 [2024-11-25 14:26:09.332741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.401 [2024-11-25 14:26:09.332749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.332985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.332995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:04.401 [2024-11-25 14:26:09.333129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.401 [2024-11-25 14:26:09.333517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.401 [2024-11-25 14:26:09.333524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:04.402 [2024-11-25 14:26:09.333731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 
[2024-11-25 14:26:09.333904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.333983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.333990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 
14:26:09.334076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.402 [2024-11-25 14:26:09.334212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.402 [2024-11-25 14:26:09.334221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 
14:26:09.334254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e22110 is same with the state(6) to be set 00:28:04.403 [2024-11-25 14:26:09.334327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.403 [2024-11-25 14:26:09.334898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.403 [2024-11-25 14:26:09.334908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.334918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.334927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.334934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.334944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.334954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.334964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.334971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.344877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.344921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.344932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.344941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.344951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.344958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.344968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.344976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.344985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.344992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345132] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.404 [2024-11-25 14:26:09.345308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.404 [2024-11-25 14:26:09.345318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.404 [2024-11-25 14:26:09.345325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.404 [2024-11-25 14:26:09.345335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.404 [2024-11-25 14:26:09.345342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.404 [2024-11-25 14:26:09.345351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.404 [2024-11-25 14:26:09.345359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.404 [2024-11-25 14:26:09.345368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.404 [2024-11-25 14:26:09.345376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.404 [2024-11-25 14:26:09.345385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.404 [2024-11-25 14:26:09.345392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.404 [2024-11-25 14:26:09.345733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c2fd0 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13cfdb0 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1396790 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf71d30 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf68990 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67920 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ebc90 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89610 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf70040 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.345895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3ce0 (9): Bad file descriptor
00:28:04.404 [2024-11-25 14:26:09.346036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.404 [2024-11-25 14:26:09.346049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:1 through cid:59, lba 24704 through 32128, timestamps 14:26:09.346065 through 14:26:09.347115 ...]
00:28:04.406 [2024-11-25 14:26:09.347125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.347132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.406 [2024-11-25 14:26:09.347141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.347150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.406 [2024-11-25 14:26:09.347165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.347173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.406 [2024-11-25 14:26:09.347183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.347191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.406 [2024-11-25 14:26:09.347200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149c1e0 is same with the state(6) to be set
00:28:04.406 [2024-11-25 14:26:09.353892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:28:04.406 [2024-11-25 14:26:09.353924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:28:04.406 [2024-11-25 14:26:09.354941] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:04.406 [2024-11-25 14:26:09.354997] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:04.406 [2024-11-25 14:26:09.355039] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:04.406 [2024-11-25 14:26:09.355054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:28:04.406 [2024-11-25 14:26:09.355071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:28:04.406 [2024-11-25 14:26:09.355536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.406 [2024-11-25 14:26:09.355580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13ebc90 with addr=10.0.0.2, port=4420
00:28:04.406 [2024-11-25 14:26:09.355592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ebc90 is same with the state(6) to be set
00:28:04.406 [2024-11-25 14:26:09.355779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.406 [2024-11-25 14:26:09.355794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe89610 with addr=10.0.0.2, port=4420
00:28:04.406 [2024-11-25 14:26:09.355802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe89610 is same with the state(6) to be set
00:28:04.406 [2024-11-25 14:26:09.355866] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:04.406 [2024-11-25 14:26:09.356472] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:04.406 [2024-11-25 14:26:09.356548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:04.406 [2024-11-25 14:26:09.356719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.406 [2024-11-25 14:26:09.356735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c3ce0 with addr=10.0.0.2, port=4420
00:28:04.406 [2024-11-25 14:26:09.356743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3ce0 is same with the state(6) to be set
00:28:04.406 [2024-11-25 14:26:09.356912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.406 [2024-11-25 14:26:09.356922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67920 with addr=10.0.0.2, port=4420
00:28:04.406 [2024-11-25 14:26:09.356930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf67920 is same with the state(6) to be set
00:28:04.406 [2024-11-25 14:26:09.356942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13ebc90 (9): Bad file descriptor
00:28:04.406 [2024-11-25 14:26:09.356954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe89610 (9): Bad file descriptor
00:28:04.406 [2024-11-25 14:26:09.357425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.406 [2024-11-25 14:26:09.357440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf68990 with addr=10.0.0.2, port=4420
00:28:04.406 [2024-11-25 14:26:09.357448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68990 is same with the state(6) to be set
00:28:04.406 [2024-11-25 14:26:09.357458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3ce0 (9): Bad file descriptor
00:28:04.406 [2024-11-25 14:26:09.357468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf67920 (9): Bad file descriptor
00:28:04.406 [2024-11-25 14:26:09.357477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:28:04.406 [2024-11-25 14:26:09.357483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:28:04.406 [2024-11-25 14:26:09.357492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:28:04.406 [2024-11-25 14:26:09.357501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:28:04.406 [2024-11-25 14:26:09.357510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:28:04.406 [2024-11-25 14:26:09.357517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:28:04.406 [2024-11-25 14:26:09.357524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:28:04.406 [2024-11-25 14:26:09.357531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
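The connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: the target 10.0.0.2 is reachable, but nothing is listening on TCP port 4420 (the IANA-assigned NVMe/TCP port) at the moment the host side tries to reconnect, consistent with the subsystem listeners being torn down while these controllers reset. A minimal standalone C sketch, written for this note and independent of SPDK, that reproduces the same errno when no listener is present:

/* Sketch only (not SPDK code): connect to a port with no listener and
 * observe errno 111 (ECONNREFUSED), the value posix_sock_create logs. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening on the port, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

The subsequent "Failed to flush tqpair=... (9): Bad file descriptor" records follow the same pattern: errno 9 is EBADF, i.e. the socket had already been closed by the reset path before the flush was attempted.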
00:28:04.406 [2024-11-25 14:26:09.357566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.357577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.406 [2024-11-25 14:26:09.357592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.357600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.406 [2024-11-25 14:26:09.357612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.357620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.406 [2024-11-25 14:26:09.357630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.406 [2024-11-25 14:26:09.357638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:1 through cid:59, lba 24704 through 32128, timestamps 14:26:09.357649 through 14:26:09.358695 ...]
00:28:04.408 [2024-11-25 14:26:09.358704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.408 [2024-11-25 14:26:09.358712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.408 [2024-11-25 14:26:09.358721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14999e0 is same with the state(6) to be set
00:28:04.408 [2024-11-25 14:26:09.360020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.408 [2024-11-25 14:26:09.360035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for cid:1 through cid:54, lba 24704 through 31488, timestamps 14:26:09.360048 through 14:26:09.361009 ...]
00:28:04.410 [2024-11-25 14:26:09.361019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.410 [2024-11-25 14:26:09.361027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.410 [2024-11-25
14:26:09.361037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.361173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.361182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149d6c0 is same with the state(6) to be set 00:28:04.410 [2024-11-25 14:26:09.362454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.410 [2024-11-25 14:26:09.362985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.410 [2024-11-25 14:26:09.362995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.363609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.363617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22bdb40 is same with the state(6) to be set 00:28:04.411 [2024-11-25 14:26:09.364880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.364893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.364904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.364915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.364925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.364933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.364943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.364950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.411 [2024-11-25 14:26:09.364960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.411 [2024-11-25 14:26:09.364968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.364978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.364985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.364996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.412 [2024-11-25 14:26:09.365595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.412 [2024-11-25 14:26:09.365605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:04.413 [2024-11-25 14:26:09.365734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 14:26:09.365894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.413 [2024-11-25 14:26:09.365903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.413 [2024-11-25 
14:26:09.365911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.413 [2024-11-25 14:26:09.365920 - 14:26:09.366015] nvme_qpair.c: the same READ / ABORTED - SQ DELETION (00/08) pair repeats for the remaining outstanding commands on this qpair (sqid:1, cid 58-63, lba 23808-24448, len:128)
00:28:04.413 [2024-11-25 14:26:09.366029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1176150 is same with the state(6) to be set
00:28:04.413 [2024-11-25 14:26:09.367320 - 14:26:09.368456] nvme_qpair.c: the identical READ / ABORTED - SQ DELETION (00/08) pattern then runs for all 64 outstanding commands on the next qpair (sqid:1, cid 0-63, lba 16384-24448 in 128-block steps, len:128)
00:28:04.415 [2024-11-25 14:26:09.368464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11776f0 is same with the state(6) to be set
00:28:04.415 [2024-11-25 14:26:09.369972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:04.415 [2024-11-25 14:26:09.369994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:28:04.415 [2024-11-25 14:26:09.370006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:04.415 [2024-11-25 14:26:09.370019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:04.415 [2024-11-25 14:26:09.370062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf68990 (9): Bad file descriptor
00:28:04.415 [2024-11-25 14:26:09.370074 - 14:26:09.370132] nvme_ctrlr.c / bdev_nvme.c: *ERROR*: cnode7 and cnode3 each report the sequence: Ctrlr is in error state, controller reinitialization failed, in failed state., Resetting controller failed.
00:28:04.415 [2024-11-25 14:26:09.370191] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:28:04.415 [2024-11-25 14:26:09.370209] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
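The (00/08) pair that spdk_nvme_print_completion prints above is the completion's Status Code Type / Status Code: SCT 0x0 is the Generic Command Status type, and within it SC 0x08 is "Command Aborted due to SQ Deletion" per the NVMe base specification, which is why every queued READ is reported as ABORTED - SQ DELETION once the submission queues are torn down during the reset. A minimal shell sketch of that decoding (the helper name and lookup table here are illustrative, not SPDK code):

  # decode_nvme_status SCT SC - hypothetical helper mirroring the "(SCT/SC)" pair in the log
  decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
      00/00) echo "GENERIC - SUCCESSFUL COMPLETION" ;;
      00/04) echo "GENERIC - DATA TRANSFER ERROR" ;;
      00/08) echo "GENERIC - COMMAND ABORTED DUE TO SQ DELETION" ;;
      *)     echo "unrecognized status (sct=$sct sc=$sc)" ;;
    esac
  }
  decode_nvme_status 00 08   # the status behind every ABORTED - SQ DELETION line above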
00:28:04.415 task offset: 24576 on job bdev=Nvme5n1 fails
00:28:04.415
00:28:04.415 Latency(us)
00:28:04.415 [2024-11-25T13:26:09.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:04.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme1n1 ended in about 0.98 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme1n1 : 0.98 196.32 12.27 65.44 0.00 241844.27 17585.49 248162.99
00:28:04.415 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme2n1 : 0.97 197.81 12.36 0.00 0.00 313684.48 26651.31 267386.88
00:28:04.415 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme3n1 ended in about 0.97 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme3n1 : 0.97 197.58 12.35 65.86 0.00 230845.65 19660.80 228939.09
00:28:04.415 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme4n1 ended in about 0.98 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme4n1 : 0.98 195.83 12.24 65.28 0.00 228264.85 10485.76 258648.75
00:28:04.415 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme5n1 ended in about 0.97 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme5n1 : 0.97 198.64 12.42 66.21 0.00 220070.40 17367.04 248162.99
00:28:04.415 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme6n1 ended in about 0.97 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme6n1 : 0.97 198.41 12.40 66.14 0.00 215631.79 36700.16 223696.21
00:28:04.415 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme7n1 ended in about 0.97 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme7n1 : 0.97 198.16 12.39 66.05 0.00 211192.32 13981.01 253405.87
00:28:04.415 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme8n1 ended in about 0.98 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme8n1 : 0.98 199.41 12.46 65.11 0.00 206744.65 19660.80 256901.12
00:28:04.415 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme9n1 ended in about 0.99 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme9n1 : 0.99 129.91 8.12 64.96 0.00 274568.53 16820.91 267386.88
00:28:04.415 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:04.415 Job: Nvme10n1 ended in about 0.99 seconds with error
00:28:04.415 Verification LBA range: start 0x0 length 0x400
00:28:04.415 Nvme10n1 : 0.99 129.59 8.10 64.80 0.00 269143.32 20643.84 248162.99
00:28:04.415 [2024-11-25T13:26:09.505Z] ===================================================================================================================
00:28:04.415 [2024-11-25T13:26:09.505Z] Total : 1841.66 115.10 589.84 0.00 237530.83 10485.76 267386.88
00:28:04.415 [2024-11-25 14:26:09.394169] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:04.415 [2024-11-25 14:26:09.394201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:04.415 [2024-11-25 14:26:09.394612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.415 [2024-11-25 14:26:09.394629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf71d30 with addr=10.0.0.2, port=4420 00:28:04.415 [2024-11-25 14:26:09.394639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf71d30 is same with the state(6) to be set 00:28:04.415 [2024-11-25 14:26:09.394961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.415 [2024-11-25 14:26:09.394972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf70040 with addr=10.0.0.2, port=4420 00:28:04.415 [2024-11-25 14:26:09.394980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf70040 is same with the state(6) to be set 00:28:04.415 [2024-11-25 14:26:09.395291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.415 [2024-11-25 14:26:09.395304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c2fd0 with addr=10.0.0.2, port=4420 00:28:04.415 [2024-11-25 14:26:09.395312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c2fd0 is same with the state(6) to be set 00:28:04.415 [2024-11-25 14:26:09.395616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.415 [2024-11-25 14:26:09.395628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cfdb0 with addr=10.0.0.2, port=4420 00:28:04.415 [2024-11-25 14:26:09.395635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cfdb0 is same with the state(6) to be set 00:28:04.415 [2024-11-25 14:26:09.395644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:04.415 [2024-11-25 14:26:09.395656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:04.415 [2024-11-25 14:26:09.395666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:04.415 [2024-11-25 14:26:09.395675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
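errno = 111 in the posix_sock_create errors above is ECONNREFUSED on Linux: the target has already shut its listener down, so every TCP reconnect to 10.0.0.2 port 4420 is refused. A quick way to confirm the mapping from a shell, using Python's errno table rather than any SPDK tooling:

  # Translate the numeric errno from the log into its symbolic name and message.
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # Expected output on Linux: ECONNREFUSED - Connection refused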
00:28:04.415 [2024-11-25 14:26:09.397029 - 14:26:09.397064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: resetting controller for cnode6, cnode5, cnode3 and cnode7
00:28:04.415 [2024-11-25 14:26:09.397292 - 14:26:09.399550] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: each reconnect attempt to 10.0.0.2 port 4420 fails with connect() errno = 111, and nvme_tcp_qpair_set_recv_state logs the same recv-state error, for tqpairs 0x1396790, 0xe89610, 0x13ebc90, 0xf67920, 0x13c3ce0 and 0xf68990
00:28:04.415 [2024-11-25 14:26:09.397326 - 14:26:09.399693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor for 0xf71d30, 0xf70040, 0x13c2fd0, 0x13cfdb0, 0x1396790, 0xe89610, 0x13ebc90, 0xf67920, 0x13c3ce0 and 0xf68990
00:28:04.415 [2024-11-25 14:26:09.397398 - 14:26:09.397434] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. for cnode9, cnode8, cnode4 and cnode1
00:28:04.416 [2024-11-25 14:26:09.399216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:28:04.416 [2024-11-25 14:26:09.399029 - 14:26:09.399740] nvme_ctrlr.c / bdev_nvme.c: *ERROR*: cnode1, cnode4, cnode8, cnode9, cnode10, cnode6, cnode5, cnode3 and cnode7 each run the same sequence - Ctrlr is in error state, controller reinitialization failed, in failed state., Resetting controller failed. - and cnode2 reaches in failed state.
00:28:04.416 [2024-11-25 14:26:09.399746] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:04.677 14:26:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3502738 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3502738 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3502738 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:05.619 
14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:05.619 rmmod nvme_tcp 00:28:05.619 rmmod nvme_fabrics 00:28:05.619 rmmod nvme_keyring 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3502356 ']' 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3502356 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3502356 ']' 00:28:05.619 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3502356 00:28:05.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3502356) - No such process 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3502356 is not found' 00:28:05.620 Process with pid 3502356 is not found 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:05.620 14:26:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:08.173 00:28:08.173 real 0m7.738s 00:28:08.173 user 0m18.977s 00:28:08.173 sys 0m1.252s 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:08.173 
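The NOT wait 3502738 trace above is the harness asserting that a command fails: valid_exec_arg checks the argument is runnable, the exit status es is captured, statuses above 128 (death by signal) are normalized, and the result is inverted so a failing command makes the test pass. A reduced sketch of that invert-the-exit-status pattern (simplified; the real helper lives in autotest_common.sh):

  # NOT cmd... succeeds only if cmd fails - a minimal model of the harness helper.
  NOT() {
    if "$@"; then
      return 1   # command unexpectedly succeeded, so the assertion fails
    fi
    return 0     # command failed, as the test expects
  }
  NOT wait 3502738 && echo 'process already gone, as the shutdown test expects'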
************************************ 00:28:08.173 END TEST nvmf_shutdown_tc3 00:28:08.173 ************************************ 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:08.173 ************************************ 00:28:08.173 START TEST nvmf_shutdown_tc4 00:28:08.173 ************************************ 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:08.173 14:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:08.173 14:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:08.173 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:08.173 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.173 14:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:08.173 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:08.173 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:08.173 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:08.174 14:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:08.174 14:26:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:08.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:08.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:28:08.174 00:28:08.174 --- 10.0.0.2 ping statistics --- 00:28:08.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.174 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:08.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:28:08.174 00:28:08.174 --- 10.0.0.1 ping statistics --- 00:28:08.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.174 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3503883 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3503883 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3503883 ']' 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
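For readers reproducing this topology outside the harness: the nvmf_tcp_init sequence above (nvmf/common.sh@250-@291) pins the target port into a private network namespace so initiator and target traffic actually crosses the wire between the two e810 ports. A minimal sketch of the same steps, assuming the interface names, namespace name, and addresses from this run; error handling, the iptables comment tag, and cleanup are omitted:

  #!/usr/bin/env bash
  # Target port goes into a private namespace; initiator port stays in the root ns.
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  # Open the NVMe/TCP port, then verify reachability in both directions.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1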
00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:08.174 14:26:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:08.174 [2024-11-25 14:26:13.233068] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:28:08.174 [2024-11-25 14:26:13.233166] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.435 [2024-11-25 14:26:13.331595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:08.435 [2024-11-25 14:26:13.370465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.435 [2024-11-25 14:26:13.370503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.435 [2024-11-25 14:26:13.370514] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.435 [2024-11-25 14:26:13.370519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.435 [2024-11-25 14:26:13.370523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.435 [2024-11-25 14:26:13.371976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:08.435 [2024-11-25 14:26:13.372133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:08.435 [2024-11-25 14:26:13.372292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:08.435 [2024-11-25 14:26:13.372422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:09.005 [2024-11-25 14:26:14.080852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:09.005 14:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.005 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:09.265 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:09.265 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:09.265 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:09.265 [... the identical shutdown.sh@28/@29 for/cat pair repeats once per subsystem, 10 times in all ...] 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:09.265 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.265 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:09.265 Malloc1
00:28:09.265 [2024-11-25 14:26:14.191121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.265 Malloc2 00:28:09.265 Malloc3 00:28:09.265 Malloc4 00:28:09.265 Malloc5 00:28:09.525 Malloc6 00:28:09.525 Malloc7 00:28:09.525 Malloc8 00:28:09.525 Malloc9 00:28:09.525 Malloc10 00:28:09.525 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.525 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:09.525 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:09.525 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:09.525 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3504261 00:28:09.525 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:09.525 14:26:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:09.786 [2024-11-25 14:26:14.669374] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3503883 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3503883 ']' 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3503883 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3503883 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3503883' 00:28:15.076 killing process with pid 3503883 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3503883 00:28:15.076 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3503883 00:28:15.076 [2024-11-25 14:26:19.671656] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22993b0 is same with the state(6) to be set 00:28:15.076
[... the same recv-state error repeats several times each for tqpair=0x23c0220, 0x23c06f0, 0x2298ee0, 0x23c1090, 0x23c1560, 0x23c1a30, 0x23c0bc0, 0x23c23d0, 0x23c28a0, 0x23c2d70 and 0x23c1f00, interleaved with the I/O failures below ...]
00:28:15.077 Write completed with error (sct=0, sc=8) 00:28:15.077 starting I/O failed: -6
[... repeated 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines omitted ...]
00:28:15.077 [2024-11-25 14:26:19.674043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write failures omitted ...]
00:28:15.078 [2024-11-25 14:26:19.675063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write failures omitted ...]
00:28:15.078 [2024-11-25 14:26:19.676211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write failures omitted ...]
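Everything from here to the end of the excerpt is the point of nvmf_shutdown_tc4: the target (pid 3503883) is killed roughly five seconds into a 20-second spdk_nvme_perf run, so the qpairs perf opened to each subsystem (ids 1-4, matching -P 4) fail with CQ transport error -6, i.e. -ENXIO "No such device or address", and in-flight writes complete with sct=0/sc=8, which corresponds to the generic NVMe status "command aborted due to SQ deletion". The same signature repeats below for cnode10 and cnode2. A sketch of the kill-under-load pattern, with $tgtpid as a placeholder for the nvmf_tgt pid; the real harness wraps the kill in killprocess() and the cleanup trap shown earlier:

  #!/usr/bin/env bash
  # Start perf against the target, then kill the target while I/O is in flight.
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5                 # let the workload ramp up (shutdown.sh@150)
  kill "$tgtpid"          # SIGTERM the nvmf_tgt mid-workload
  wait "$tgtpid" || true
  # perf now observes transport errors; tc4 passes if nothing hangs or crashes.
  wait "$perfpid" || true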
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:15.079 NVMe io qpair process completion error 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 [2024-11-25 14:26:19.678665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed 
with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 [2024-11-25 14:26:19.679469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O failed: -6 00:28:15.079 Write completed with error (sct=0, sc=8) 00:28:15.079 starting I/O 
00:28:15.079 Write completed with error (sct=0, sc=8)
00:28:15.079 starting I/O failed: -6
[... the two entries above repeat once per outstanding write on the failing qpairs; identical repeats are omitted here and in the blocks below ...]
00:28:15.079 [2024-11-25 14:26:19.680404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:15.080 [2024-11-25 14:26:19.682641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:15.080 NVMe io qpair process completion error
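The nvme_qpair.c:812 entries above come from SPDK's public completion-polling API. A minimal sketch, assuming an already connected qpair, of how a caller observes this failure mode; poll_qpair() and the print format are illustrative, only the spdk_* call is SPDK API:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            /* 0 == no limit on completions processed in one call. Returns the
             * number of completions reaped, or a negated errno once the
             * transport has failed the qpair (-6 == -ENXIO, as logged above). */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc < 0) {
                    fprintf(stderr, "CQ transport error %d on qpair\n", rc);
            }
    }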
[... aborted-write repeats for nqn.2016-06.io.spdk:cnode2 omitted ...]
00:28:15.080 [2024-11-25 14:26:19.683660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:15.081 [2024-11-25 14:26:19.684556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:15.081 [2024-11-25 14:26:19.685679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:15.082 [2024-11-25 14:26:19.687738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:15.082 NVMe io qpair process completion error
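The per-command "(sct=0, sc=8)" pairs are NVMe status fields: status code type 0 is the generic command set, and status code 8 there is "command aborted due to SQ deletion", which is expected while qpairs are being torn down. A hedged sketch of a completion callback that decodes them; the constant names come from spdk/nvme_spec.h, the callback body is illustrative:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Standard spdk_nvme_cmd_cb signature; invoked once per completed write. */
    static void
    write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)arg;
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* sct=0 -> SPDK_NVME_SCT_GENERIC,
                     * sc=8  -> SPDK_NVME_SC_ABORTED_SQ_DELETION. */
                    fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                            cpl->status.sct, cpl->status.sc);
            }
    }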
[... aborted-write repeats for nqn.2016-06.io.spdk:cnode9 omitted ...]
00:28:15.082 [2024-11-25 14:26:19.689073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:15.082 [2024-11-25 14:26:19.689889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:15.082 [2024-11-25 14:26:19.690837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:15.083 [2024-11-25 14:26:19.692333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:15.083 NVMe io qpair process completion error
[... aborted-write repeats for nqn.2016-06.io.spdk:cnode1 omitted ...]
00:28:15.083 [2024-11-25 14:26:19.693553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
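"starting I/O failed: -6" is the submission-side counterpart: once a qpair is disconnected, new writes are rejected synchronously instead of being queued. A sketch under that assumption; the buffer, LBA arguments, and the io_done stub are placeholders, while spdk_nvme_ns_cmd_write() and its negated-errno return are SPDK API:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            (void)arg; (void)cpl;   /* completion handling elided */
    }

    static void
    submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba, uint32_t lba_count)
    {
            int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                            io_done, NULL, 0);
            if (rc != 0) {
                    /* -6 == -ENXIO: the qpair/controller is gone. */
                    printf("starting I/O failed: %d\n", rc);
            }
    }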
[... aborted-write repeats for nqn.2016-06.io.spdk:cnode1 omitted ...]
00:28:15.084 [2024-11-25 14:26:19.694401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:15.084 [2024-11-25 14:26:19.695347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:15.084 [2024-11-25 14:26:19.698033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:15.084 NVMe io qpair process completion error
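After a CQ transport error the qpair stays disconnected until the application recovers it; one public option is spdk_nvme_ctrlr_reconnect_io_qpair(). A sketch assuming the controller reset is driven elsewhere; the bounded retry policy is illustrative, not the test's actual logic:

    #include <errno.h>
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static bool
    try_reconnect(struct spdk_nvme_qpair *qpair, int max_attempts)
    {
            for (int i = 0; i < max_attempts; i++) {
                    int rc = spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
                    if (rc == 0) {
                            return true;    /* qpair is usable again */
                    }
                    if (rc != -EAGAIN) {
                            return false;   /* unrecoverable, e.g. -ENODEV */
                    }
                    /* -EAGAIN: controller reset still in progress; retry. */
            }
            return false;
    }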
[... aborted-write repeats for nqn.2016-06.io.spdk:cnode3 omitted ...]
00:28:15.085 [2024-11-25 14:26:19.699113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:15.085 [2024-11-25 14:26:19.699942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:15.085 [2024-11-25 14:26:19.701080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:15.086 [2024-11-25 14:26:19.702722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address)
on qpair id 4 00:28:15.086 NVMe io qpair process completion error 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 [2024-11-25 14:26:19.703726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.086 starting I/O failed: -6 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, 
sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 starting I/O failed: -6 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.086 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 [2024-11-25 14:26:19.704555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write 
completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 [2024-11-25 14:26:19.705490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O 
failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O 
failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.087 starting I/O failed: -6 00:28:15.087 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 [2024-11-25 14:26:19.707131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:15.088 NVMe io qpair process completion error 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 [2024-11-25 14:26:19.708625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O 
failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 starting I/O failed: -6 00:28:15.088 starting I/O failed: -6 00:28:15.088 starting I/O failed: -6 00:28:15.088 starting I/O failed: -6 00:28:15.088 starting I/O failed: -6 00:28:15.088 starting I/O failed: -6 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 
starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 [2024-11-25 14:26:19.710493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with 
error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.088 Write completed with error (sct=0, sc=8) 00:28:15.088 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error 
(sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 [2024-11-25 14:26:19.713336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:15.089 NVMe io qpair process completion error 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write 
completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 [2024-11-25 14:26:19.714524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 
starting I/O failed: -6 00:28:15.089 [2024-11-25 14:26:19.715385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 Write completed with error (sct=0, sc=8) 00:28:15.089 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 
00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 [2024-11-25 14:26:19.716312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:15.090 starting I/O failed: -6 00:28:15.090 starting I/O failed: -6 00:28:15.090 starting I/O failed: -6 00:28:15.090 starting I/O failed: -6 00:28:15.090 starting I/O failed: -6 00:28:15.090 starting I/O failed: -6 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with 
error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 [2024-11-25 14:26:19.718144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:15.090 NVMe io qpair process completion error 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write 
completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.090 starting I/O failed: -6 00:28:15.090 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 [2024-11-25 14:26:19.719426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 
00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 [2024-11-25 14:26:19.720309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed with error (sct=0, sc=8) 00:28:15.091 starting I/O failed: -6 00:28:15.091 Write completed 
with error (sct=0, sc=8)
00:28:15.091 starting I/O failed: -6
00:28:15.091 Write completed with error (sct=0, sc=8)
00:28:15.091 starting I/O failed: -6
00:28:15.091 [... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated for every outstanding write on the failing qpair ...]
00:28:15.091 [2024-11-25 14:26:19.721225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:15.092 Write completed with error (sct=0, sc=8)
00:28:15.092 starting I/O failed: -6
00:28:15.092 [... same write-error / I/O-failed pairs repeated ...]
00:28:15.092 [2024-11-25 14:26:19.724339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:15.092 NVMe io qpair process completion error
00:28:15.092 Initializing NVMe Controllers
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:15.092 Controller IO queue size 128, less than required.
00:28:15.092 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:15.092 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:15.092 Initialization complete. Launching workers.
00:28:15.092 ========================================================
00:28:15.092 Latency(us)
00:28:15.092 Device Information : IOPS MiB/s Average min max
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1914.04 82.24 66891.73 739.90 119287.95
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1871.03 80.40 68460.65 636.51 152471.23
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1895.19 81.43 67610.19 665.23 119795.59
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1874.00 80.52 68409.60 719.60 126381.33
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1910.86 82.11 67109.32 647.62 119189.01
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1901.75 81.72 67460.13 534.26 118965.51
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1908.11 81.99 67274.78 678.10 132842.14
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1894.13 81.39 67794.44 883.10 134787.93
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1858.11 79.84 68407.24 671.02 119074.85
00:28:15.092 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1898.57 81.58 66969.23 805.62 120743.11
00:28:15.092 ========================================================
00:28:15.092 Total : 18925.79 813.22 67633.69 534.26 152471.23
00:28:15.092
00:28:15.092 [2024-11-25 14:26:19.728273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6edae0 is same with the state(6) to be set
00:28:15.092 [2024-11-25 14:26:19.728317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6eca70 is same with the state(6) to be set
00:28:15.092 [2024-11-25 14:26:19.728352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ed720 is same with the state(6) to be set
00:28:15.092 [2024-11-25 14:26:19.728382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6eb560 is same with the state(6) to be set
00:28:15.092 [2024-11-25 14:26:19.728414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ec740 is same with the state(6) to be set
00:28:15.093 [2024-11-25 14:26:19.728443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ebbc0 is same with the state(6) to be set
00:28:15.093 [2024-11-25 14:26:19.728472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ebef0 is same with the state(6) to be set
00:28:15.093 [2024-11-25 14:26:19.728500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ec410 is same with the state(6) to be set
00:28:15.093 [2024-11-25 14:26:19.728529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6eb890 is same with the state(6) to be set
00:28:15.093 [2024-11-25 14:26:19.728558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ed900 is same with the state(6) to be set
00:28:15.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:15.093 14:26:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3504261
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3504261
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3504261
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:28:16.032 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 --
target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:16.033 rmmod nvme_tcp 00:28:16.033 rmmod nvme_fabrics 00:28:16.033 rmmod nvme_keyring 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3503883 ']' 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3503883 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3503883 ']' 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3503883 00:28:16.033 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3503883) - No such process 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3503883 is not found' 00:28:16.033 Process with pid 3503883 is not found 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:16.033 14:26:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:16.033 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.033 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.033 14:26:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:18.683 00:28:18.683 real 0m10.282s 00:28:18.683 user 0m28.009s 00:28:18.683 sys 0m3.988s 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:18.683 ************************************ 00:28:18.683 END TEST nvmf_shutdown_tc4 00:28:18.683 ************************************ 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:18.683 00:28:18.683 real 0m43.203s 00:28:18.683 user 1m43.996s 00:28:18.683 sys 0m13.856s 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:18.683 ************************************ 00:28:18.683 END TEST nvmf_shutdown 00:28:18.683 ************************************ 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:18.683 ************************************ 00:28:18.683 START TEST nvmf_nsid 00:28:18.683 ************************************ 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:18.683 * Looking for test storage... 
00:28:18.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:18.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.683 --rc genhtml_branch_coverage=1 00:28:18.683 --rc genhtml_function_coverage=1 00:28:18.683 --rc genhtml_legend=1 00:28:18.683 --rc geninfo_all_blocks=1 00:28:18.683 --rc geninfo_unexecuted_blocks=1 00:28:18.683 00:28:18.683 ' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:18.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.683 --rc genhtml_branch_coverage=1 00:28:18.683 --rc genhtml_function_coverage=1 00:28:18.683 --rc genhtml_legend=1 00:28:18.683 --rc geninfo_all_blocks=1 00:28:18.683 --rc geninfo_unexecuted_blocks=1 00:28:18.683 00:28:18.683 ' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:18.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.683 --rc genhtml_branch_coverage=1 00:28:18.683 --rc genhtml_function_coverage=1 00:28:18.683 --rc genhtml_legend=1 00:28:18.683 --rc geninfo_all_blocks=1 00:28:18.683 --rc geninfo_unexecuted_blocks=1 00:28:18.683 00:28:18.683 ' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:18.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.683 --rc genhtml_branch_coverage=1 00:28:18.683 --rc genhtml_function_coverage=1 00:28:18.683 --rc genhtml_legend=1 00:28:18.683 --rc geninfo_all_blocks=1 00:28:18.683 --rc geninfo_unexecuted_blocks=1 00:28:18.683 00:28:18.683 ' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.683 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:18.684 14:26:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:26.859 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:26.859 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
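The gather_supported_nvmf_pci_devs pass traced here first classifies candidate NICs by PCI vendor/device ID (both ports on this rig report Intel 0x159b, i.e. an E810 handled by the ice driver) and then resolves each PCI function to its kernel net device by globbing sysfs, as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion below shows. A minimal standalone sketch of that sysfs lookup; the list_netdevs helper name is ours, not common.sh's:

    # List the kernel net devices backing one PCI function (e.g. 0000:4b:00.0).
    list_netdevs() {
        local pci=$1 entry
        for entry in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$entry" ] && basename "$entry"
        done
    }
    list_netdevs 0000:4b:00.0   # prints cvl_0_0 on this machine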
00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:26.859 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:26.859 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.859 14:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:26.859 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:26.860 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:26.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.667 ms 00:28:26.860 00:28:26.860 --- 10.0.0.2 ping statistics --- 00:28:26.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.860 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:26.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:26.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:28:26.860 00:28:26.860 --- 10.0.0.1 ping statistics --- 00:28:26.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:26.860 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3509620 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3509620 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3509620 ']' 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:26.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:26.860 14:26:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:26.860 [2024-11-25 14:26:30.986926] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
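The block above is nvmf_tcp_init splitting the two E810 ports across network namespaces so a single host can play both roles: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and serves the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a one-packet ping in each direction proves the path before any NVMe traffic flows. The traced commands condense to this sketch (interface, namespace, and addresses as in this run; needs root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
    ping -c 1 10.0.0.2                                   # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

Every nvmf_tgt invocation that follows is therefore wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the target listens on 10.0.0.2 while nvme connect later runs from the root namespace.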
00:28:26.860 [2024-11-25 14:26:30.986988] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:26.860 [2024-11-25 14:26:31.089663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.860 [2024-11-25 14:26:31.141420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:26.860 [2024-11-25 14:26:31.141474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:26.860 [2024-11-25 14:26:31.141483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:26.860 [2024-11-25 14:26:31.141490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:26.860 [2024-11-25 14:26:31.141497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:26.860 [2024-11-25 14:26:31.142261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3509965 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:26.860 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=83807525-4478-4bb0-a28b-b342d04241d2 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=aaf9ccc8-15e4-425e-b4ca-0c5f83eff2c0 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=837e2f31-55af-499c-824c-d4a318bd6248 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.861 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:26.861 null0 00:28:26.861 null1 00:28:26.861 [2024-11-25 14:26:31.914295] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:28:26.861 [2024-11-25 14:26:31.914361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509965 ] 00:28:26.861 null2 00:28:26.861 [2024-11-25 14:26:31.919183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:26.861 [2024-11-25 14:26:31.943518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3509965 /var/tmp/tgt2.sock 00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3509965 ']' 00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:27.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
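The three uuidgen calls above pick the UUIDs for namespaces ns1-ns3 on the second target; the test will later read each namespace's NGUID back through nvme id-ns and require it to match the UUID. nvmf/common.sh's uuid2nguid (traced further below as a tr -d - pipeline) reduces to roughly this, assuming the conversion is exactly strip-the-dashes-and-uppercase:

    uuid2nguid() {
        # An NGUID here is the namespace UUID's 32 hex digits, dashes removed.
        local uuid=${1^^}       # uppercase to match nvme-cli's reported form
        echo "${uuid//-/}"
    }
    uuid2nguid 83807525-4478-4bb0-a28b-b342d04241d2
    # -> 8380752544784BB0A28BB342D04241D2

That value is what the long [[ ... == \8\3\8\0... ]] pattern comparisons below check against for each of the three namespaces.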
00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.121 14:26:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:27.121 [2024-11-25 14:26:31.991604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.121 [2024-11-25 14:26:32.044484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.382 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.382 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:27.382 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:27.643 [2024-11-25 14:26:32.607398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.643 [2024-11-25 14:26:32.623595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:27.643 nvme0n1 nvme0n2 00:28:27.643 nvme1n1 00:28:27.643 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:27.643 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:27.643 14:26:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:29.029 14:26:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:30.416 14:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 83807525-4478-4bb0-a28b-b342d04241d2 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8380752544784bb0a28bb342d04241d2 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8380752544784BB0A28BB342D04241D2 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 8380752544784BB0A28BB342D04241D2 == \8\3\8\0\7\5\2\5\4\4\7\8\4\B\B\0\A\2\8\B\B\3\4\2\D\0\4\2\4\1\D\2 ]] 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid aaf9ccc8-15e4-425e-b4ca-0c5f83eff2c0 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=aaf9ccc815e4425eb4ca0c5f83eff2c0 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AAF9CCC815E4425EB4CA0C5F83EFF2C0 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AAF9CCC815E4425EB4CA0C5F83EFF2C0 == \A\A\F\9\C\C\C\8\1\5\E\4\4\2\5\E\B\4\C\A\0\C\5\F\8\3\E\F\F\2\C\0 ]] 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:30.416 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:30.417 14:26:35 
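The NGUID checks above rely on a simple identity: the namespace NGUID is the same 128-bit value as the namespace UUID, so stripping the dashes from the UUID (and normalizing case) must reproduce what nvme id-ns reports. A sketch of the two helpers as they behave in the trace:

    # UUID -> NGUID: drop the dashes, upper-case the 32 hex digits.
    uuid2nguid() {
        echo "${1//-/}" | tr '[:lower:]' '[:upper:]'
    }

    # Read the NGUID the kernel sees for namespace $2 of controller $1.
    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2
        nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid |
            tr '[:lower:]' '[:upper:]'
    }

    [[ "$(nvme_get_nguid nvme0 1)" == "$(uuid2nguid 83807525-4478-4bb0-a28b-b342d04241d2)" ]] \
        && echo "nsid 1 NGUID matches its UUID"

The test repeats this for all three namespaces, which proves the target assigned each UUID to the intended nsid.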
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 837e2f31-55af-499c-824c-d4a318bd6248 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=837e2f3155af499c824cd4a318bd6248 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 837E2F3155AF499C824CD4A318BD6248 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 837E2F3155AF499C824CD4A318BD6248 == \8\3\7\E\2\F\3\1\5\5\A\F\4\9\9\C\8\2\4\C\D\4\A\3\1\8\B\D\6\2\4\8 ]] 00:28:30.417 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3509965 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3509965 ']' 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3509965 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3509965 00:28:30.678 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.679 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.679 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3509965' 00:28:30.679 killing process with pid 3509965 00:28:30.679 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3509965 00:28:30.679 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3509965 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
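Cleanup starts with killprocess, whose safety checks are all visible in the trace: verify a pid was given and is still alive, confirm via ps that it is the reactor process rather than a sudo wrapper, then kill and reap it. A minimal rendering of that logic (error handling simplified):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        # Never signal a sudo wrapper directly; expect the reactor.
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

Note that wait only reaps children of the current shell, which holds here because the test script launched the target itself.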
# set +e 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.940 rmmod nvme_tcp 00:28:30.940 rmmod nvme_fabrics 00:28:30.940 rmmod nvme_keyring 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3509620 ']' 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3509620 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3509620 ']' 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3509620 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.940 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3509620 00:28:30.941 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.941 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.941 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3509620' 00:28:30.941 killing process with pid 3509620 00:28:30.941 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3509620 00:28:30.941 14:26:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3509620 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.201 14:26:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.115 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:33.115 00:28:33.115 real 0m14.930s 00:28:33.115 user 
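nvmftestfini's teardown has two halves, both visible above: unload the initiator-side kernel modules (retrying, because nvme-tcp can hold references briefly after a disconnect), and strip every SPDK-tagged rule out of the firewall by round-tripping the ruleset through a grep filter. Condensed:

    # Retry the module unload; references can linger right after a
    # disconnect, so the loop tolerates transient failures.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    # Drop only the rules carrying the SPDK_NVMF comment marker.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The comment marker attached at rule-insertion time (the -m comment usage visible in the later setup trace) is what makes the grep-based removal safe.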
0m11.452s 00:28:33.115 sys 0m6.813s 00:28:33.115 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.115 14:26:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:33.115 ************************************ 00:28:33.115 END TEST nvmf_nsid 00:28:33.115 ************************************ 00:28:33.115 14:26:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:33.115 00:28:33.115 real 13m6.770s 00:28:33.115 user 27m25.966s 00:28:33.115 sys 3m55.719s 00:28:33.115 14:26:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.115 14:26:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:33.115 ************************************ 00:28:33.115 END TEST nvmf_target_extra 00:28:33.115 ************************************ 00:28:33.378 14:26:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:33.378 14:26:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:33.378 14:26:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.378 14:26:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:33.378 ************************************ 00:28:33.378 START TEST nvmf_host 00:28:33.378 ************************************ 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:33.378 * Looking for test storage... 00:28:33.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:33.378 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.640 --rc genhtml_branch_coverage=1 00:28:33.640 --rc genhtml_function_coverage=1 00:28:33.640 --rc genhtml_legend=1 00:28:33.640 --rc geninfo_all_blocks=1 00:28:33.640 --rc geninfo_unexecuted_blocks=1 00:28:33.640 00:28:33.640 ' 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.640 --rc genhtml_branch_coverage=1 00:28:33.640 --rc genhtml_function_coverage=1 00:28:33.640 --rc genhtml_legend=1 00:28:33.640 --rc geninfo_all_blocks=1 00:28:33.640 --rc geninfo_unexecuted_blocks=1 00:28:33.640 00:28:33.640 ' 00:28:33.640 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.640 --rc genhtml_branch_coverage=1 00:28:33.640 --rc genhtml_function_coverage=1 00:28:33.640 --rc genhtml_legend=1 00:28:33.640 --rc geninfo_all_blocks=1 00:28:33.640 --rc geninfo_unexecuted_blocks=1 00:28:33.641 00:28:33.641 ' 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.641 --rc genhtml_branch_coverage=1 00:28:33.641 --rc genhtml_function_coverage=1 00:28:33.641 --rc genhtml_legend=1 00:28:33.641 --rc geninfo_all_blocks=1 00:28:33.641 --rc geninfo_unexecuted_blocks=1 00:28:33.641 00:28:33.641 ' 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
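The lcov gate above is a plain field-wise version comparison: split both versions on dots, compare numerically left to right, and treat missing fields as zero. A self-contained equivalent of the lt/cmp_versions pair being traced:

    # Return 0 (true) if version $1 < version $2.
    lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal is not less-than
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 \
        && echo "old lcov: use the pre-2.0 option spelling"

Here it decides whether the old or new lcov option set is exported, which is why the LCOV_OPTS blocks appear right after the comparison.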
00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.641 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.641 ************************************ 00:28:33.641 START TEST nvmf_multicontroller 00:28:33.641 ************************************ 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:33.641 * Looking for test storage... 
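The "[: : integer expression expected" complaint above (it recurs every time common.sh is sourced) is bash rejecting an arithmetic test whose operand is empty: the trace shows the literal '[' '' -eq 1 ']', i.e. an unset configuration variable fed to -eq. It is harmless here, since the construct consuming the test simply treats the failure as false, but the usual defensive spelling defaults the operand first. A hypothetical guard (the variable name is illustrative, not the one in common.sh):

    # Treat unset/empty as 0 so [ ... -eq ... ] always sees an integer.
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi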
00:28:33.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.641 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.903 --rc genhtml_branch_coverage=1 00:28:33.903 --rc genhtml_function_coverage=1 00:28:33.903 --rc genhtml_legend=1 00:28:33.903 --rc geninfo_all_blocks=1 00:28:33.903 --rc geninfo_unexecuted_blocks=1 00:28:33.903 00:28:33.903 ' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.903 --rc genhtml_branch_coverage=1 00:28:33.903 --rc genhtml_function_coverage=1 00:28:33.903 --rc genhtml_legend=1 00:28:33.903 --rc geninfo_all_blocks=1 00:28:33.903 --rc geninfo_unexecuted_blocks=1 00:28:33.903 00:28:33.903 ' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.903 --rc genhtml_branch_coverage=1 00:28:33.903 --rc genhtml_function_coverage=1 00:28:33.903 --rc genhtml_legend=1 00:28:33.903 --rc geninfo_all_blocks=1 00:28:33.903 --rc geninfo_unexecuted_blocks=1 00:28:33.903 00:28:33.903 ' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.903 --rc genhtml_branch_coverage=1 00:28:33.903 --rc genhtml_function_coverage=1 00:28:33.903 --rc genhtml_legend=1 00:28:33.903 --rc geninfo_all_blocks=1 00:28:33.903 --rc geninfo_unexecuted_blocks=1 00:28:33.903 00:28:33.903 ' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:33.903 14:26:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.903 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:33.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:33.904 14:26:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:28:33.904 14:26:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.048 
14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:42.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:42.048 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.048 14:26:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:42.048 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:42.048 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
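The discovery pass above walks the known NIC PCI IDs (Intel E810 is 0x8086:0x159b) and resolves each matching function to its kernel net device through sysfs, which is how the two cvl_0_* interfaces are found. A stripped-down sketch of the same walk, assuming only the standard sysfs layout:

    # Find Intel E810 functions and list the net interfaces they expose.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
        [ "$(cat "$pci/device")" = 0x159b ] || continue
        echo "Found ${pci##*/}: $(ls "$pci/net")"
    done

On this machine that yields the two 0000:4b:00.x functions with their cvl_0_0 and cvl_0_1 interfaces, and those names feed directly into the namespace setup that follows.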
00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.048 14:26:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.048 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.048 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.048 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.048 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.048 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.048 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:28:42.049 00:28:42.049 --- 10.0.0.2 ping statistics --- 00:28:42.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.049 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:28:42.049 00:28:42.049 --- 10.0.0.1 ping statistics --- 00:28:42.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.049 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3515066 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3515066 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3515066 ']' 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.049 14:26:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.049 [2024-11-25 14:26:46.364776] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
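nvmf_tcp_init, traced above, solves the one-host problem by splitting the NIC pair across network namespaces: the target-side port moves into cvl_0_0_ns_spdk, the initiator-side port stays in the root namespace, and a first-position iptables ACCEPT (tagged with the SPDK_NVMF comment for later cleanup) opens port 4420. The two pings prove reachability in both directions before any NVMe traffic flows. The whole sequence, condensed from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

Because the target runs inside the namespace, every target-side command in the rest of the log is wrapped in ip netns exec cvl_0_0_ns_spdk.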
00:28:42.049 [2024-11-25 14:26:46.364844] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.049 [2024-11-25 14:26:46.465865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:42.049 [2024-11-25 14:26:46.521244] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.049 [2024-11-25 14:26:46.521296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.049 [2024-11-25 14:26:46.521305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.049 [2024-11-25 14:26:46.521313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.049 [2024-11-25 14:26:46.521319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.049 [2024-11-25 14:26:46.523205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.049 [2024-11-25 14:26:46.523383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.049 [2024-11-25 14:26:46.523383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 [2024-11-25 14:26:47.245365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 Malloc0 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
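The -m 0xE passed to nvmf_tgt is a reactor core mask: 0xE is binary 1110, so bits 1-3 are set and the app reports "Total cores available: 3" with reactors on cores 1, 2 and 3, leaving core 0 free for the rest of the system. A one-liner to decode any such mask:

    # Decode a core mask into the CPU list SPDK will use.
    mask=0xE
    printf 'cores:'; for b in {0..31}; do (( (mask >> b) & 1 )) && printf ' %d' "$b"; done; echo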
common/autotest_common.sh@10 -- # set +x 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 [2024-11-25 14:26:47.326070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 [2024-11-25 14:26:47.337979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 Malloc1 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.311 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3515179 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3515179 /var/tmp/bdevperf.sock 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3515179 ']' 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:42.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
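The rpc_cmd calls above assemble the target configuration step by step: a TCP transport, two 64 MB malloc bdevs with 512-byte blocks, subsystems cnode1 and cnode2, and listeners on ports 4420 and 4421 of 10.0.0.2; bdevperf is then launched as the initiator-side application with its own RPC socket (/var/tmp/bdevperf.sock). Issued directly with SPDK's scripts/rpc.py against the target's default socket, the target-side part would look roughly like this condensed sketch (not a verbatim replay of the harness):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # SPDK's standard RPC client
    $rpc nvmf_create_transport -t tcp -o -u 8192                           # transport options exactly as in the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # ...and likewise Malloc1 / nqn.2016-06.io.spdk:cnode2 on the same two ports.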
00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.573 14:26:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.518 NVMe0n1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.518 1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.518 request: 00:28:43.518 { 00:28:43.518 "name": "NVMe0", 00:28:43.518 "trtype": "tcp", 00:28:43.518 "traddr": "10.0.0.2", 00:28:43.518 "adrfam": "ipv4", 00:28:43.518 "trsvcid": "4420", 00:28:43.518 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:43.518 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:43.518 "hostaddr": "10.0.0.1", 00:28:43.518 "prchk_reftag": false, 00:28:43.518 "prchk_guard": false, 00:28:43.518 "hdgst": false, 00:28:43.518 "ddgst": false, 00:28:43.518 "allow_unrecognized_csi": false, 00:28:43.518 "method": "bdev_nvme_attach_controller", 00:28:43.518 "req_id": 1 00:28:43.518 } 00:28:43.518 Got JSON-RPC error response 00:28:43.518 response: 00:28:43.518 { 00:28:43.518 "code": -114, 00:28:43.518 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:43.518 } 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.518 request: 00:28:43.518 { 00:28:43.518 "name": "NVMe0", 00:28:43.518 "trtype": "tcp", 00:28:43.518 "traddr": "10.0.0.2", 00:28:43.518 "adrfam": "ipv4", 00:28:43.518 "trsvcid": "4420", 00:28:43.518 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:43.518 "hostaddr": "10.0.0.1", 00:28:43.518 "prchk_reftag": false, 00:28:43.518 "prchk_guard": false, 00:28:43.518 "hdgst": false, 00:28:43.518 "ddgst": false, 00:28:43.518 "allow_unrecognized_csi": false, 00:28:43.518 "method": "bdev_nvme_attach_controller", 00:28:43.518 "req_id": 1 00:28:43.518 } 00:28:43.518 Got JSON-RPC error response 00:28:43.518 response: 00:28:43.518 { 00:28:43.518 "code": -114, 00:28:43.518 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:43.518 } 00:28:43.518 14:26:48 
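Both rejected attach attempts above are intentional: the harness wraps them in its NOT helper, which inverts the exit status, so the test passes precisely because rpc_cmd fails (that is what the es=1 bookkeeping in the trace is checking). Re-using the controller name NVMe0 for the same network path, whether with a different hostnqn or against cnode2, is expected to come back with JSON-RPC error -114. A hand-rolled sketch of the same expected-failure pattern, reusing the $rpc shorthand from the sketch above:

    # The attach must fail with -114: NVMe0 already exists on this path.
    if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi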
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.518 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.518 request: 00:28:43.518 { 00:28:43.518 "name": "NVMe0", 00:28:43.518 "trtype": "tcp", 00:28:43.518 "traddr": "10.0.0.2", 00:28:43.518 "adrfam": "ipv4", 00:28:43.518 "trsvcid": "4420", 00:28:43.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.518 "hostaddr": "10.0.0.1", 00:28:43.518 "prchk_reftag": false, 00:28:43.518 "prchk_guard": false, 00:28:43.518 "hdgst": false, 00:28:43.518 "ddgst": false, 00:28:43.518 "multipath": "disable", 00:28:43.518 "allow_unrecognized_csi": false, 00:28:43.518 "method": "bdev_nvme_attach_controller", 00:28:43.518 "req_id": 1 00:28:43.518 } 00:28:43.518 Got JSON-RPC error response 00:28:43.518 response: 00:28:43.518 { 00:28:43.518 "code": -114, 00:28:43.518 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:43.518 } 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:43.519 14:26:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.519 request: 00:28:43.519 { 00:28:43.519 "name": "NVMe0", 00:28:43.519 "trtype": "tcp", 00:28:43.519 "traddr": "10.0.0.2", 00:28:43.519 "adrfam": "ipv4", 00:28:43.519 "trsvcid": "4420", 00:28:43.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.519 "hostaddr": "10.0.0.1", 00:28:43.519 "prchk_reftag": false, 00:28:43.519 "prchk_guard": false, 00:28:43.519 "hdgst": false, 00:28:43.519 "ddgst": false, 00:28:43.519 "multipath": "failover", 00:28:43.519 "allow_unrecognized_csi": false, 00:28:43.519 "method": "bdev_nvme_attach_controller", 00:28:43.519 "req_id": 1 00:28:43.519 } 00:28:43.519 Got JSON-RPC error response 00:28:43.519 response: 00:28:43.519 { 00:28:43.519 "code": -114, 00:28:43.519 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:43.519 } 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.519 NVMe0n1 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
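Taken together, the four attach attempts map out bdev_nvme's multipath policy for an existing controller name: re-attaching to the same portal is rejected (-114) in every variant, '-x disable' refuses any additional path, and '-x failover' on the same portal is rejected as well. Only the final call, aimed at the second listener on port 4421, is accepted and becomes a second path under NVMe0. Condensed (same $rpc shorthand, a sketch):

    # same name, same portal               -> -114 (already exists with the specified network path)
    # same name, -x disable                -> -114 (multipath is disabled)
    # same name, -x failover, same portal  -> -114 (a failover path needs a different portal)
    # same name, different portal (4421)   -> accepted as a second path of NVMe0:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1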
00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.519 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.780 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:43.780 14:26:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:44.721 { 00:28:44.721 "results": [ 00:28:44.721 { 00:28:44.721 "job": "NVMe0n1", 00:28:44.721 "core_mask": "0x1", 00:28:44.721 "workload": "write", 00:28:44.721 "status": "finished", 00:28:44.721 "queue_depth": 128, 00:28:44.721 "io_size": 4096, 00:28:44.721 "runtime": 1.006433, 00:28:44.721 "iops": 25621.178955777483, 00:28:44.721 "mibps": 100.08273029600579, 00:28:44.721 "io_failed": 0, 00:28:44.721 "io_timeout": 0, 00:28:44.721 "avg_latency_us": 4983.650398407405, 00:28:44.721 "min_latency_us": 2389.3333333333335, 00:28:44.721 "max_latency_us": 16384.0 00:28:44.721 } 00:28:44.721 ], 00:28:44.721 "core_count": 1 00:28:44.721 } 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3515179 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3515179 ']' 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3515179 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3515179 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3515179' 00:28:44.983 killing process with pid 3515179 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3515179 00:28:44.983 14:26:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3515179 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:28:44.983 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:44.983 [2024-11-25 14:26:47.475760] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:28:44.983 [2024-11-25 14:26:47.475844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515179 ]
00:28:44.983 [2024-11-25 14:26:47.567521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.983 [2024-11-25 14:26:47.621502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:44.983 [2024-11-25 14:26:48.673338] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name 2d57bcdc-69f4-453a-80b3-fc51363a0c35 already exists
00:28:44.983 [2024-11-25 14:26:48.673387] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:2d57bcdc-69f4-453a-80b3-fc51363a0c35 alias for bdev NVMe1n1
00:28:44.983 [2024-11-25 14:26:48.673398] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:28:44.983 Running I/O for 1 seconds...
00:28:44.983 25591.00 IOPS, 99.96 MiB/s
00:28:44.983 Latency(us)
[2024-11-25T13:26:50.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:44.983 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:28:44.983 NVMe0n1 : 1.01 25621.18 100.08 0.00 0.00 4983.65 2389.33 16384.00
00:28:44.983 [2024-11-25T13:26:50.073Z] ===================================================================================================================
00:28:44.983 [2024-11-25T13:26:50.073Z] Total : 25621.18 100.08 0.00 0.00 4983.65 2389.33 16384.00
00:28:44.983 Received shutdown signal, test time was about 1.000000 seconds
00:28:44.983 
00:28:44.983 Latency(us)
[2024-11-25T13:26:50.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:44.983 [2024-11-25T13:26:50.073Z] ===================================================================================================================
00:28:44.983 [2024-11-25T13:26:50.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:44.983 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:28:44.983 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:45.245 rmmod nvme_tcp
00:28:45.245 rmmod nvme_fabrics
00:28:45.245 rmmod nvme_keyring
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
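The bdevperf numbers in try.txt are internally consistent: 25621.18 IOPS of 4 KiB writes is 25621.18 x 4096 / 2^20 ~ 100.08 MiB/s, exactly the MiB/s column, and with queue depth 128 Little's law predicts 128 / 4983.65 us ~ 25.7k IOPS, in line with the measured rate. As a quick cross-check (a reading aid only, bc assumed available):

    echo 'scale=2; 25621.18 * 4096 / 1048576' | bc   # 100.08 -> the MiB/s column
    echo '128 / 0.00498365' | bc                     # ~25.7k -> IOPS implied by QD / avg latency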
14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3515066 ']' 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3515066 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3515066 ']' 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3515066 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3515066 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3515066' 00:28:45.245 killing process with pid 3515066 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3515066 00:28:45.245 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3515066 00:28:45.505 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:45.505 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:45.505 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:45.505 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:28:45.505 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:28:45.505 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:45.506 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:28:45.506 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:45.506 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:45.506 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.506 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:45.506 14:26:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.421 14:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:47.421 00:28:47.421 real 0m13.900s 00:28:47.421 user 0m16.609s 00:28:47.421 sys 0m6.547s 00:28:47.421 14:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.421 14:26:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:47.421 ************************************ 00:28:47.421 END TEST nvmf_multicontroller 00:28:47.421 ************************************ 00:28:47.421 14:26:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:28:47.421 14:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:47.421 14:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.421 14:26:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.683 ************************************ 00:28:47.683 START TEST nvmf_aer 00:28:47.683 ************************************ 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:47.683 * Looking for test storage... 00:28:47.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:47.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.683 --rc genhtml_branch_coverage=1 00:28:47.683 --rc genhtml_function_coverage=1 00:28:47.683 --rc genhtml_legend=1 00:28:47.683 --rc geninfo_all_blocks=1 00:28:47.683 --rc geninfo_unexecuted_blocks=1 00:28:47.683 00:28:47.683 ' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:47.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.683 --rc genhtml_branch_coverage=1 00:28:47.683 --rc genhtml_function_coverage=1 00:28:47.683 --rc genhtml_legend=1 00:28:47.683 --rc geninfo_all_blocks=1 00:28:47.683 --rc geninfo_unexecuted_blocks=1 00:28:47.683 00:28:47.683 ' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:47.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.683 --rc genhtml_branch_coverage=1 00:28:47.683 --rc genhtml_function_coverage=1 00:28:47.683 --rc genhtml_legend=1 00:28:47.683 --rc geninfo_all_blocks=1 00:28:47.683 --rc geninfo_unexecuted_blocks=1 00:28:47.683 00:28:47.683 ' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:47.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:47.683 --rc genhtml_branch_coverage=1 00:28:47.683 --rc genhtml_function_coverage=1 00:28:47.683 --rc genhtml_legend=1 00:28:47.683 --rc geninfo_all_blocks=1 00:28:47.683 --rc geninfo_unexecuted_blocks=1 00:28:47.683 00:28:47.683 ' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:47.683 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:47.684 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:47.684 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:47.684 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:47.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:47.684 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:47.684 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:47.684 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:47.945 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:47.945 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:47.945 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:47.945 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:47.945 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:47.946 14:26:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:56.089 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:56.089 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:56.089 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:56.089 14:26:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:56.089 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:56.090 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.090 14:26:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:56.090 
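Before the pings that follow, nvmf_tcp_init has split the two E810 ports into a small two-node topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), and an iptables rule admits the NVMe/TCP listener port. Condensed from the trace above (interface names as in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT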
14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:56.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:28:56.090 00:28:56.090 --- 10.0.0.2 ping statistics --- 00:28:56.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.090 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:28:56.090 00:28:56.090 --- 10.0.0.1 ping statistics --- 00:28:56.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.090 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3519976 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3519976 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3519976 ']' 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.090 14:27:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.090 [2024-11-25 14:27:00.349469] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
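nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers; a minimal sketch of the launch-and-wait step (the real waitforlisten in autotest_common.sh does more bookkeeping, the polling loop here is only illustrative):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default RPC socket until the app answers before issuing RPCs
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done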
00:28:56.090 [2024-11-25 14:27:00.349536] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.090 [2024-11-25 14:27:00.449573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.090 [2024-11-25 14:27:00.503747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:56.090 [2024-11-25 14:27:00.503802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.090 [2024-11-25 14:27:00.503812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.090 [2024-11-25 14:27:00.503820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.090 [2024-11-25 14:27:00.503827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.090 [2024-11-25 14:27:00.505990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.090 [2024-11-25 14:27:00.506154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.090 [2024-11-25 14:27:00.506317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.090 [2024-11-25 14:27:00.506412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.090 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.090 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:28:56.090 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.090 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.090 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.353 [2024-11-25 14:27:01.227565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.353 Malloc0 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.353 [2024-11-25 14:27:01.301347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.353 [ 00:28:56.353 { 00:28:56.353 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:56.353 "subtype": "Discovery", 00:28:56.353 "listen_addresses": [], 00:28:56.353 "allow_any_host": true, 00:28:56.353 "hosts": [] 00:28:56.353 }, 00:28:56.353 { 00:28:56.353 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:56.353 "subtype": "NVMe", 00:28:56.353 "listen_addresses": [ 00:28:56.353 { 00:28:56.353 "trtype": "TCP", 00:28:56.353 "adrfam": "IPv4", 00:28:56.353 "traddr": "10.0.0.2", 00:28:56.353 "trsvcid": "4420" 00:28:56.353 } 00:28:56.353 ], 00:28:56.353 "allow_any_host": true, 00:28:56.353 "hosts": [], 00:28:56.353 "serial_number": "SPDK00000000000001", 00:28:56.353 "model_number": "SPDK bdev Controller", 00:28:56.353 "max_namespaces": 2, 00:28:56.353 "min_cntlid": 1, 00:28:56.353 "max_cntlid": 65519, 00:28:56.353 "namespaces": [ 00:28:56.353 { 00:28:56.353 "nsid": 1, 00:28:56.353 "bdev_name": "Malloc0", 00:28:56.353 "name": "Malloc0", 00:28:56.353 "nguid": "43CFEEFF719A4616A70DEB30F674BD0E", 00:28:56.353 "uuid": "43cfeeff-719a-4616-a70d-eb30f674bd0e" 00:28:56.353 } 00:28:56.353 ] 00:28:56.353 } 00:28:56.353 ] 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3520217 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:28:56.353 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.615 Malloc1 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.615 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.615 Asynchronous Event Request test 00:28:56.615 Attaching to 10.0.0.2 00:28:56.615 Attached to 10.0.0.2 00:28:56.615 Registering asynchronous event callbacks... 00:28:56.615 Starting namespace attribute notice tests for all controllers... 00:28:56.615 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:56.615 aer_cb - Changed Namespace 00:28:56.615 Cleaning up... 
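The AER exchange above reduces to a short RPC sequence: publish a one-namespace subsystem, let the aer tool connect and arm its callback, then hot-add a second namespace so the target emits a Namespace Attribute Changed notice (log page 4), which is the aer_cb line in the output. Condensed from the trace (rpc_cmd talks to the target's /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 --name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test/nvme/aer/aer connects here and registers for AENs; the hot-add
    # below is what fires the notice seen above
    rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2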
00:28:56.615 [ 00:28:56.615 { 00:28:56.615 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:56.615 "subtype": "Discovery", 00:28:56.615 "listen_addresses": [], 00:28:56.615 "allow_any_host": true, 00:28:56.615 "hosts": [] 00:28:56.615 }, 00:28:56.615 { 00:28:56.615 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:56.615 "subtype": "NVMe", 00:28:56.615 "listen_addresses": [ 00:28:56.615 { 00:28:56.615 "trtype": "TCP", 00:28:56.615 "adrfam": "IPv4", 00:28:56.615 "traddr": "10.0.0.2", 00:28:56.615 "trsvcid": "4420" 00:28:56.615 } 00:28:56.615 ], 00:28:56.616 "allow_any_host": true, 00:28:56.616 "hosts": [], 00:28:56.616 "serial_number": "SPDK00000000000001", 00:28:56.616 "model_number": "SPDK bdev Controller", 00:28:56.616 "max_namespaces": 2, 00:28:56.616 "min_cntlid": 1, 00:28:56.616 "max_cntlid": 65519, 00:28:56.616 "namespaces": [ 00:28:56.616 { 00:28:56.616 "nsid": 1, 00:28:56.616 "bdev_name": "Malloc0", 00:28:56.616 "name": "Malloc0", 00:28:56.616 "nguid": "43CFEEFF719A4616A70DEB30F674BD0E", 00:28:56.616 "uuid": "43cfeeff-719a-4616-a70d-eb30f674bd0e" 00:28:56.616 }, 00:28:56.616 { 00:28:56.616 "nsid": 2, 00:28:56.616 "bdev_name": "Malloc1", 00:28:56.616 "name": "Malloc1", 00:28:56.616 "nguid": "4692788238524E6994732A316D832819", 00:28:56.616 "uuid": "46927882-3852-4e69-9473-2a316d832819" 00:28:56.616 } 00:28:56.616 ] 00:28:56.616 } 00:28:56.616 ] 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3520217 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:56.616 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:56.616 rmmod 
nvme_tcp 00:28:56.616 rmmod nvme_fabrics 00:28:56.877 rmmod nvme_keyring 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3519976 ']' 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3519976 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3519976 ']' 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3519976 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3519976 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3519976' 00:28:56.877 killing process with pid 3519976 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3519976 00:28:56.877 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3519976 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.139 14:27:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.054 14:27:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:59.054 00:28:59.054 real 0m11.527s 00:28:59.054 user 0m8.026s 00:28:59.054 sys 0m6.228s 00:28:59.054 14:27:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:59.054 14:27:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:59.054 ************************************ 00:28:59.054 END TEST nvmf_aer 00:28:59.054 ************************************ 00:28:59.054 14:27:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:59.054 14:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:59.054 14:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:59.054 14:27:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.316 ************************************ 00:28:59.316 START TEST nvmf_async_init 00:28:59.316 ************************************ 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:59.316 * Looking for test storage... 00:28:59.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:59.316 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.317 --rc genhtml_branch_coverage=1 00:28:59.317 --rc genhtml_function_coverage=1 00:28:59.317 --rc genhtml_legend=1 00:28:59.317 --rc geninfo_all_blocks=1 00:28:59.317 --rc geninfo_unexecuted_blocks=1 00:28:59.317 00:28:59.317 ' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.317 --rc genhtml_branch_coverage=1 00:28:59.317 --rc genhtml_function_coverage=1 00:28:59.317 --rc genhtml_legend=1 00:28:59.317 --rc geninfo_all_blocks=1 00:28:59.317 --rc geninfo_unexecuted_blocks=1 00:28:59.317 00:28:59.317 ' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.317 --rc genhtml_branch_coverage=1 00:28:59.317 --rc genhtml_function_coverage=1 00:28:59.317 --rc genhtml_legend=1 00:28:59.317 --rc geninfo_all_blocks=1 00:28:59.317 --rc geninfo_unexecuted_blocks=1 00:28:59.317 00:28:59.317 ' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:59.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.317 --rc genhtml_branch_coverage=1 00:28:59.317 --rc genhtml_function_coverage=1 00:28:59.317 --rc genhtml_legend=1 00:28:59.317 --rc geninfo_all_blocks=1 00:28:59.317 --rc geninfo_unexecuted_blocks=1 00:28:59.317 00:28:59.317 ' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.317 14:27:04 
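The lt 1.15 2 walk traced just above (scripts/common.sh) is a plain component-wise version compare: both strings are split on '.' and '-' and compared left to right, with missing components treated as 0. A simplified sketch of that logic (the real cmp_versions handles more operators and edge cases):

    lt() {   # succeed when $1 < $2
        local -a v1 v2
        local i
        IFS=.- read -ra v1 <<< "$1"
        IFS=.- read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo older   # as in the trace: 1 < 2 decides at the first component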
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:59.317 14:27:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7949734efcb14190a5fc8509d79ac525 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.317 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.578 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.578 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.578 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.578 14:27:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.724 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:07.725 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:07.725 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:07.725 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:07.725 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.725 14:27:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:29:07.725 00:29:07.725 --- 10.0.0.2 ping statistics --- 00:29:07.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.725 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:07.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:29:07.725 00:29:07.725 --- 10.0.0.1 ping statistics --- 00:29:07.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.725 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.725 14:27:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3525028 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3525028 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3525028 ']' 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.725 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.725 [2024-11-25 14:27:12.086414] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
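One detail worth flagging before the parameter dump continues: the NGUID this test hands to nvmf_subsystem_add_ns -g (7949734efcb14190a5fc8509d79ac525, generated further up with uuidgen | tr -d -) is just a dash-stripped UUID, which is why the bdev dumps below report the matching dashed "uuid". The setup about to be traced, condensed:

    nguid=$(uuidgen | tr -d -)   # 7949734e-fcb1-... -> 7949734efcb14190...
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd bdev_null_create null0 1024 512   # 1024 MiB / 512 B blocks = the 2097152 num_blocks below
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420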
00:29:07.725 [2024-11-25 14:27:12.086484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.725 [2024-11-25 14:27:12.187376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.725 [2024-11-25 14:27:12.238363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.725 [2024-11-25 14:27:12.238414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.725 [2024-11-25 14:27:12.238423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.725 [2024-11-25 14:27:12.238430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.726 [2024-11-25 14:27:12.238436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.726 [2024-11-25 14:27:12.239201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.987 [2024-11-25 14:27:12.948764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.987 null0 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7949734efcb14190a5fc8509d79ac525 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.987 14:27:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:07.987 [2024-11-25 14:27:13.009142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.987 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.248 nvme0n1 00:29:08.248 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.248 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:08.248 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.248 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.248 [ 00:29:08.248 { 00:29:08.249 "name": "nvme0n1", 00:29:08.249 "aliases": [ 00:29:08.249 "7949734e-fcb1-4190-a5fc-8509d79ac525" 00:29:08.249 ], 00:29:08.249 "product_name": "NVMe disk", 00:29:08.249 "block_size": 512, 00:29:08.249 "num_blocks": 2097152, 00:29:08.249 "uuid": "7949734e-fcb1-4190-a5fc-8509d79ac525", 00:29:08.249 "numa_id": 0, 00:29:08.249 "assigned_rate_limits": { 00:29:08.249 "rw_ios_per_sec": 0, 00:29:08.249 "rw_mbytes_per_sec": 0, 00:29:08.249 "r_mbytes_per_sec": 0, 00:29:08.249 "w_mbytes_per_sec": 0 00:29:08.249 }, 00:29:08.249 "claimed": false, 00:29:08.249 "zoned": false, 00:29:08.249 "supported_io_types": { 00:29:08.249 "read": true, 00:29:08.249 "write": true, 00:29:08.249 "unmap": false, 00:29:08.249 "flush": true, 00:29:08.249 "reset": true, 00:29:08.249 "nvme_admin": true, 00:29:08.249 "nvme_io": true, 00:29:08.249 "nvme_io_md": false, 00:29:08.249 "write_zeroes": true, 00:29:08.249 "zcopy": false, 00:29:08.249 "get_zone_info": false, 00:29:08.249 "zone_management": false, 00:29:08.249 "zone_append": false, 00:29:08.249 "compare": true, 00:29:08.249 "compare_and_write": true, 00:29:08.249 "abort": true, 00:29:08.249 "seek_hole": false, 00:29:08.249 "seek_data": false, 00:29:08.249 "copy": true, 00:29:08.249 "nvme_iov_md": false 00:29:08.249 }, 00:29:08.249 
"memory_domains": [ 00:29:08.249 { 00:29:08.249 "dma_device_id": "system", 00:29:08.249 "dma_device_type": 1 00:29:08.249 } 00:29:08.249 ], 00:29:08.249 "driver_specific": { 00:29:08.249 "nvme": [ 00:29:08.249 { 00:29:08.249 "trid": { 00:29:08.249 "trtype": "TCP", 00:29:08.249 "adrfam": "IPv4", 00:29:08.249 "traddr": "10.0.0.2", 00:29:08.249 "trsvcid": "4420", 00:29:08.249 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:08.249 }, 00:29:08.249 "ctrlr_data": { 00:29:08.249 "cntlid": 1, 00:29:08.249 "vendor_id": "0x8086", 00:29:08.249 "model_number": "SPDK bdev Controller", 00:29:08.249 "serial_number": "00000000000000000000", 00:29:08.249 "firmware_revision": "25.01", 00:29:08.249 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.249 "oacs": { 00:29:08.249 "security": 0, 00:29:08.249 "format": 0, 00:29:08.249 "firmware": 0, 00:29:08.249 "ns_manage": 0 00:29:08.249 }, 00:29:08.249 "multi_ctrlr": true, 00:29:08.249 "ana_reporting": false 00:29:08.249 }, 00:29:08.249 "vs": { 00:29:08.249 "nvme_version": "1.3" 00:29:08.249 }, 00:29:08.249 "ns_data": { 00:29:08.249 "id": 1, 00:29:08.249 "can_share": true 00:29:08.249 } 00:29:08.249 } 00:29:08.249 ], 00:29:08.249 "mp_policy": "active_passive" 00:29:08.249 } 00:29:08.249 } 00:29:08.249 ] 00:29:08.249 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.249 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:08.249 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.249 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.249 [2024-11-25 14:27:13.285648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:08.249 [2024-11-25 14:27:13.285737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe19610 (9): Bad file descriptor 00:29:08.511 [2024-11-25 14:27:13.417265] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.511 [ 00:29:08.511 { 00:29:08.511 "name": "nvme0n1", 00:29:08.511 "aliases": [ 00:29:08.511 "7949734e-fcb1-4190-a5fc-8509d79ac525" 00:29:08.511 ], 00:29:08.511 "product_name": "NVMe disk", 00:29:08.511 "block_size": 512, 00:29:08.511 "num_blocks": 2097152, 00:29:08.511 "uuid": "7949734e-fcb1-4190-a5fc-8509d79ac525", 00:29:08.511 "numa_id": 0, 00:29:08.511 "assigned_rate_limits": { 00:29:08.511 "rw_ios_per_sec": 0, 00:29:08.511 "rw_mbytes_per_sec": 0, 00:29:08.511 "r_mbytes_per_sec": 0, 00:29:08.511 "w_mbytes_per_sec": 0 00:29:08.511 }, 00:29:08.511 "claimed": false, 00:29:08.511 "zoned": false, 00:29:08.511 "supported_io_types": { 00:29:08.511 "read": true, 00:29:08.511 "write": true, 00:29:08.511 "unmap": false, 00:29:08.511 "flush": true, 00:29:08.511 "reset": true, 00:29:08.511 "nvme_admin": true, 00:29:08.511 "nvme_io": true, 00:29:08.511 "nvme_io_md": false, 00:29:08.511 "write_zeroes": true, 00:29:08.511 "zcopy": false, 00:29:08.511 "get_zone_info": false, 00:29:08.511 "zone_management": false, 00:29:08.511 "zone_append": false, 00:29:08.511 "compare": true, 00:29:08.511 "compare_and_write": true, 00:29:08.511 "abort": true, 00:29:08.511 "seek_hole": false, 00:29:08.511 "seek_data": false, 00:29:08.511 "copy": true, 00:29:08.511 "nvme_iov_md": false 00:29:08.511 }, 00:29:08.511 "memory_domains": [ 00:29:08.511 { 00:29:08.511 "dma_device_id": "system", 00:29:08.511 "dma_device_type": 1 00:29:08.511 } 00:29:08.511 ], 00:29:08.511 "driver_specific": { 00:29:08.511 "nvme": [ 00:29:08.511 { 00:29:08.511 "trid": { 00:29:08.511 "trtype": "TCP", 00:29:08.511 "adrfam": "IPv4", 00:29:08.511 "traddr": "10.0.0.2", 00:29:08.511 "trsvcid": "4420", 00:29:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:08.511 }, 00:29:08.511 "ctrlr_data": { 00:29:08.511 "cntlid": 2, 00:29:08.511 "vendor_id": "0x8086", 00:29:08.511 "model_number": "SPDK bdev Controller", 00:29:08.511 "serial_number": "00000000000000000000", 00:29:08.511 "firmware_revision": "25.01", 00:29:08.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.511 "oacs": { 00:29:08.511 "security": 0, 00:29:08.511 "format": 0, 00:29:08.511 "firmware": 0, 00:29:08.511 "ns_manage": 0 00:29:08.511 }, 00:29:08.511 "multi_ctrlr": true, 00:29:08.511 "ana_reporting": false 00:29:08.511 }, 00:29:08.511 "vs": { 00:29:08.511 "nvme_version": "1.3" 00:29:08.511 }, 00:29:08.511 "ns_data": { 00:29:08.511 "id": 1, 00:29:08.511 "can_share": true 00:29:08.511 } 00:29:08.511 } 00:29:08.511 ], 00:29:08.511 "mp_policy": "active_passive" 00:29:08.511 } 00:29:08.511 } 00:29:08.511 ] 00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.511 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
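The trace up to this point is the async_init happy path end to end: export a null bdev as a namespace, open a TCP listener, attach from the host side, dump the resulting bdev, reset the controller, and dump it again. The *ERROR* about failing to flush the tqpair ((9): Bad file descriptor) during the reset is benign in this run: the socket has already been torn down by the disconnect, and the reset completes successfully on the very next notice; the second bdev dump then shows the same uuid reattached under cntlid 2 instead of 1. Condensed into the underlying RPCs, the sequence looks like the sketch below (a sketch, not the test script itself; it assumes a running SPDK target with a tcp transport and nqn.2016-06.io.spdk:cnode0 already created, plus the stock scripts/rpc.py client, and the jq line is an optional extra):

# Target side: expose a namespace and a TCP listener.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7949734efcb14190a5fc8509d79ac525
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host side: attach, verify, reset, verify again, detach.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1        # dump shows cntlid 1
./scripts/rpc.py bdev_nvme_reset_controller nvme0
./scripts/rpc.py bdev_get_bdevs -b nvme0n1        # same uuid, cntlid now 2
# Optional, while still attached: extract just the active cntlid (assumes jq is installed;
# the JSON path matches the bdev_get_bdevs output captured above).
./scripts/rpc.py bdev_get_bdevs -b nvme0n1 | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
./scripts/rpc.py bdev_nvme_detach_controller nvme0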
00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.l64so16PgU 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.l64so16PgU 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.l64so16PgU 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 [2024-11-25 14:27:13.506323] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:08.512 [2024-11-25 14:27:13.506491] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.512 [2024-11-25 14:27:13.530404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:08.512 nvme0n1 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.512 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.773 [ 00:29:08.773 { 00:29:08.773 "name": "nvme0n1", 00:29:08.773 "aliases": [ 00:29:08.773 "7949734e-fcb1-4190-a5fc-8509d79ac525" 00:29:08.773 ], 00:29:08.773 "product_name": "NVMe disk", 00:29:08.773 "block_size": 512, 00:29:08.773 "num_blocks": 2097152, 00:29:08.773 "uuid": "7949734e-fcb1-4190-a5fc-8509d79ac525", 00:29:08.773 "numa_id": 0, 00:29:08.773 "assigned_rate_limits": { 00:29:08.773 "rw_ios_per_sec": 0, 00:29:08.773 "rw_mbytes_per_sec": 0, 00:29:08.773 "r_mbytes_per_sec": 0, 00:29:08.773 "w_mbytes_per_sec": 0 00:29:08.773 }, 00:29:08.773 "claimed": false, 00:29:08.773 "zoned": false, 00:29:08.773 "supported_io_types": { 00:29:08.773 "read": true, 00:29:08.773 "write": true, 00:29:08.773 "unmap": false, 00:29:08.773 "flush": true, 00:29:08.773 "reset": true, 00:29:08.773 "nvme_admin": true, 00:29:08.773 "nvme_io": true, 00:29:08.773 "nvme_io_md": false, 00:29:08.773 "write_zeroes": true, 00:29:08.773 "zcopy": false, 00:29:08.773 "get_zone_info": false, 00:29:08.773 "zone_management": false, 00:29:08.773 "zone_append": false, 00:29:08.773 "compare": true, 00:29:08.773 "compare_and_write": true, 00:29:08.773 "abort": true, 00:29:08.773 "seek_hole": false, 00:29:08.773 "seek_data": false, 00:29:08.773 "copy": true, 00:29:08.773 "nvme_iov_md": false 00:29:08.773 }, 00:29:08.773 "memory_domains": [ 00:29:08.773 { 00:29:08.773 "dma_device_id": "system", 00:29:08.773 "dma_device_type": 1 00:29:08.773 } 00:29:08.773 ], 00:29:08.773 "driver_specific": { 00:29:08.773 "nvme": [ 00:29:08.773 { 00:29:08.773 "trid": { 00:29:08.773 "trtype": "TCP", 00:29:08.773 "adrfam": "IPv4", 00:29:08.773 "traddr": "10.0.0.2", 00:29:08.773 "trsvcid": "4421", 00:29:08.773 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:08.773 }, 00:29:08.773 "ctrlr_data": { 00:29:08.773 "cntlid": 3, 00:29:08.773 "vendor_id": "0x8086", 00:29:08.773 "model_number": "SPDK bdev Controller", 00:29:08.773 "serial_number": "00000000000000000000", 00:29:08.773 "firmware_revision": "25.01", 00:29:08.773 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.773 "oacs": { 00:29:08.773 "security": 0, 00:29:08.773 "format": 0, 00:29:08.773 "firmware": 0, 00:29:08.773 "ns_manage": 0 00:29:08.773 }, 00:29:08.773 "multi_ctrlr": true, 00:29:08.773 "ana_reporting": false 00:29:08.773 }, 00:29:08.773 "vs": { 00:29:08.773 "nvme_version": "1.3" 00:29:08.773 }, 00:29:08.773 "ns_data": { 00:29:08.773 "id": 1, 00:29:08.773 "can_share": true 00:29:08.773 } 00:29:08.773 } 00:29:08.773 ], 00:29:08.773 "mp_policy": "active_passive" 00:29:08.773 } 00:29:08.773 } 00:29:08.773 ] 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.l64so16PgU 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
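The second attach above repeats the exercise over a TLS-secured listener on port 4421: the test writes a PSK in NVMe TLS interchange format to a mode-0600 file, registers it with the keyring, disables allow-any-host on the subsystem, opens a --secure-channel listener, whitelists the host NQN against key0, and attaches with --psk (both sides log that TLS support is experimental). A condensed sketch of the same RPCs follows; the key path is the mktemp result from this run, and the redirect into the key file is implied by the trace (bash xtrace does not print redirections):

KEY=/tmp/tmp.l64so16PgU
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
chmod 0600 "$KEY"                                  # PSK files must not be world-readable
./scripts/rpc.py keyring_file_add_key key0 "$KEY"
# Lock the subsystem down, then open a TLS listener and whitelist the host NQN.
./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
# Attach over the secure channel; -q carries the whitelisted host NQN.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0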
00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:08.773 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:08.774 rmmod nvme_tcp 00:29:08.774 rmmod nvme_fabrics 00:29:08.774 rmmod nvme_keyring 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3525028 ']' 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3525028 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3525028 ']' 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3525028 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3525028 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3525028' 00:29:08.774 killing process with pid 3525028 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3525028 00:29:08.774 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3525028 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.035 14:27:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.950 14:27:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:10.950 00:29:10.950 real 0m11.874s 00:29:10.950 user 0m4.155s 00:29:10.950 sys 0m6.299s 00:29:10.950 14:27:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.950 14:27:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:10.950 ************************************ 00:29:10.950 END TEST nvmf_async_init 00:29:10.950 ************************************ 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.210 ************************************ 00:29:11.210 START TEST dma 00:29:11.210 ************************************ 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:11.210 * Looking for test storage... 00:29:11.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.210 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.471 --rc genhtml_branch_coverage=1 00:29:11.471 --rc genhtml_function_coverage=1 00:29:11.471 --rc genhtml_legend=1 00:29:11.471 --rc geninfo_all_blocks=1 00:29:11.471 --rc geninfo_unexecuted_blocks=1 00:29:11.471 00:29:11.471 ' 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.471 --rc genhtml_branch_coverage=1 00:29:11.471 --rc genhtml_function_coverage=1 00:29:11.471 --rc genhtml_legend=1 00:29:11.471 --rc geninfo_all_blocks=1 00:29:11.471 --rc geninfo_unexecuted_blocks=1 00:29:11.471 00:29:11.471 ' 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.471 --rc genhtml_branch_coverage=1 00:29:11.471 --rc genhtml_function_coverage=1 00:29:11.471 --rc genhtml_legend=1 00:29:11.471 --rc geninfo_all_blocks=1 00:29:11.471 --rc geninfo_unexecuted_blocks=1 00:29:11.471 00:29:11.471 ' 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.471 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.471 --rc genhtml_branch_coverage=1 00:29:11.471 --rc genhtml_function_coverage=1 00:29:11.471 --rc genhtml_legend=1 00:29:11.471 --rc geninfo_all_blocks=1 00:29:11.471 --rc geninfo_unexecuted_blocks=1 00:29:11.471 00:29:11.471 ' 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.471 
14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.471 14:27:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:11.472 00:29:11.472 real 0m0.236s 00:29:11.472 user 0m0.135s 00:29:11.472 sys 0m0.118s 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:11.472 ************************************ 00:29:11.472 END TEST dma 00:29:11.472 ************************************ 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.472 ************************************ 00:29:11.472 START TEST nvmf_identify 00:29:11.472 
************************************ 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:11.472 * Looking for test storage... 00:29:11.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:11.472 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.734 --rc genhtml_branch_coverage=1 00:29:11.734 --rc genhtml_function_coverage=1 00:29:11.734 --rc genhtml_legend=1 00:29:11.734 --rc geninfo_all_blocks=1 00:29:11.734 --rc geninfo_unexecuted_blocks=1 00:29:11.734 00:29:11.734 ' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.734 --rc genhtml_branch_coverage=1 00:29:11.734 --rc genhtml_function_coverage=1 00:29:11.734 --rc genhtml_legend=1 00:29:11.734 --rc geninfo_all_blocks=1 00:29:11.734 --rc geninfo_unexecuted_blocks=1 00:29:11.734 00:29:11.734 ' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.734 --rc genhtml_branch_coverage=1 00:29:11.734 --rc genhtml_function_coverage=1 00:29:11.734 --rc genhtml_legend=1 00:29:11.734 --rc geninfo_all_blocks=1 00:29:11.734 --rc geninfo_unexecuted_blocks=1 00:29:11.734 00:29:11.734 ' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.734 --rc genhtml_branch_coverage=1 00:29:11.734 --rc genhtml_function_coverage=1 00:29:11.734 --rc genhtml_legend=1 00:29:11.734 --rc geninfo_all_blocks=1 00:29:11.734 --rc geninfo_unexecuted_blocks=1 00:29:11.734 00:29:11.734 ' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:11.734 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:11.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:11.735 14:27:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:19.870 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:19.871 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:19.871 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:19.871 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:19.871 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:19.871 14:27:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:19.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:29:19.871 00:29:19.871 --- 10.0.0.2 ping statistics --- 00:29:19.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.871 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:19.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:29:19.871 00:29:19.871 --- 10.0.0.1 ping statistics --- 00:29:19.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.871 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3529771 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3529771 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3529771 ']' 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.871 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.872 14:27:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:19.872 [2024-11-25 14:27:24.231287] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:29:19.872 [2024-11-25 14:27:24.231355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.872 [2024-11-25 14:27:24.329930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:19.872 [2024-11-25 14:27:24.383923] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.872 [2024-11-25 14:27:24.383977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.872 [2024-11-25 14:27:24.383985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.872 [2024-11-25 14:27:24.383992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.872 [2024-11-25 14:27:24.383999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:19.872 [2024-11-25 14:27:24.386460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.872 [2024-11-25 14:27:24.386623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.872 [2024-11-25 14:27:24.386790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.872 [2024-11-25 14:27:24.386790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.132 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 [2024-11-25 14:27:25.067117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 Malloc0 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 [2024-11-25 14:27:25.186231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.133 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.133 [ 00:29:20.133 { 00:29:20.133 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:20.133 "subtype": "Discovery", 00:29:20.133 "listen_addresses": [ 00:29:20.133 { 00:29:20.133 "trtype": "TCP", 00:29:20.133 "adrfam": "IPv4", 00:29:20.133 "traddr": "10.0.0.2", 00:29:20.133 "trsvcid": "4420" 00:29:20.133 } 00:29:20.133 ], 00:29:20.133 "allow_any_host": true, 00:29:20.133 "hosts": [] 00:29:20.133 }, 00:29:20.133 { 00:29:20.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.133 "subtype": "NVMe", 00:29:20.133 "listen_addresses": [ 00:29:20.133 { 00:29:20.133 "trtype": "TCP", 00:29:20.133 "adrfam": "IPv4", 00:29:20.133 "traddr": "10.0.0.2", 00:29:20.133 "trsvcid": "4420" 00:29:20.133 } 00:29:20.133 ], 00:29:20.133 "allow_any_host": true, 00:29:20.133 "hosts": [], 00:29:20.133 "serial_number": "SPDK00000000000001", 00:29:20.133 "model_number": "SPDK bdev Controller", 00:29:20.133 "max_namespaces": 32, 00:29:20.133 "min_cntlid": 1, 00:29:20.133 "max_cntlid": 65519, 00:29:20.133 "namespaces": [ 00:29:20.133 { 00:29:20.133 "nsid": 1, 00:29:20.133 "bdev_name": "Malloc0", 00:29:20.133 "name": "Malloc0", 00:29:20.133 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:20.133 "eui64": "ABCDEF0123456789", 00:29:20.133 "uuid": "9804e4c2-074f-4ed6-b6c1-5606f04c5cee" 00:29:20.133 } 00:29:20.133 ] 00:29:20.133 } 00:29:20.133 ] 00:29:20.395 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.395 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:20.395 [2024-11-25 14:27:25.249501] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:29:20.395 [2024-11-25 14:27:25.249548] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529817 ] 00:29:20.395 [2024-11-25 14:27:25.304916] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:20.395 [2024-11-25 14:27:25.304988] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:20.395 [2024-11-25 14:27:25.304994] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:20.395 [2024-11-25 14:27:25.305011] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:20.395 [2024-11-25 14:27:25.305023] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:20.395 [2024-11-25 14:27:25.308564] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:20.395 [2024-11-25 14:27:25.308615] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1488690 0 00:29:20.395 [2024-11-25 14:27:25.316177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:20.395 [2024-11-25 14:27:25.316195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:20.395 [2024-11-25 14:27:25.316200] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:20.395 [2024-11-25 14:27:25.316204] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:20.395 [2024-11-25 14:27:25.316254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.316261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.316265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.395 [2024-11-25 14:27:25.316281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:20.395 [2024-11-25 14:27:25.316306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.395 [2024-11-25 14:27:25.323172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.395 [2024-11-25 14:27:25.323184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.395 [2024-11-25 14:27:25.323188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.395 [2024-11-25 14:27:25.323204] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:20.395 [2024-11-25 14:27:25.323213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:20.395 [2024-11-25 14:27:25.323218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:20.395 [2024-11-25 14:27:25.323239] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.395 [2024-11-25 14:27:25.323257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.395 [2024-11-25 14:27:25.323275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.395 [2024-11-25 14:27:25.323518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.395 [2024-11-25 14:27:25.323526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.395 [2024-11-25 14:27:25.323529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.395 [2024-11-25 14:27:25.323539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:20.395 [2024-11-25 14:27:25.323547] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:20.395 [2024-11-25 14:27:25.323554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.395 [2024-11-25 14:27:25.323569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.395 [2024-11-25 14:27:25.323579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.395 [2024-11-25 14:27:25.323781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.395 [2024-11-25 14:27:25.323787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.395 [2024-11-25 14:27:25.323790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.395 [2024-11-25 14:27:25.323799] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:20.395 [2024-11-25 14:27:25.323808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:20.395 [2024-11-25 14:27:25.323815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.323822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.395 [2024-11-25 14:27:25.323829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.395 [2024-11-25 14:27:25.323840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 
00:29:20.395 [2024-11-25 14:27:25.324071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.395 [2024-11-25 14:27:25.324077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.395 [2024-11-25 14:27:25.324081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.395 [2024-11-25 14:27:25.324085] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.396 [2024-11-25 14:27:25.324090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:20.396 [2024-11-25 14:27:25.324100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.324120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.396 [2024-11-25 14:27:25.324131] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.396 [2024-11-25 14:27:25.324339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.396 [2024-11-25 14:27:25.324346] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.396 [2024-11-25 14:27:25.324349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.396 [2024-11-25 14:27:25.324358] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:20.396 [2024-11-25 14:27:25.324364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:20.396 [2024-11-25 14:27:25.324371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:20.396 [2024-11-25 14:27:25.324481] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:20.396 [2024-11-25 14:27:25.324486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:20.396 [2024-11-25 14:27:25.324495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324498] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324502] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.324509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.396 [2024-11-25 14:27:25.324520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.396 [2024-11-25 14:27:25.324717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.396 [2024-11-25 14:27:25.324723] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.396 [2024-11-25 14:27:25.324727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.396 [2024-11-25 14:27:25.324735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:20.396 [2024-11-25 14:27:25.324745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324749] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.324760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.396 [2024-11-25 14:27:25.324770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.396 [2024-11-25 14:27:25.324961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.396 [2024-11-25 14:27:25.324967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.396 [2024-11-25 14:27:25.324970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.324974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.396 [2024-11-25 14:27:25.324979] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:20.396 [2024-11-25 14:27:25.324984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:20.396 [2024-11-25 14:27:25.324995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:20.396 [2024-11-25 14:27:25.325003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:20.396 [2024-11-25 14:27:25.325013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.325017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.325024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.396 [2024-11-25 14:27:25.325035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.396 [2024-11-25 14:27:25.325295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.396 [2024-11-25 14:27:25.325302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.396 [2024-11-25 14:27:25.325306] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.325310] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1488690): datao=0, datal=4096, cccid=0 00:29:20.396 [2024-11-25 14:27:25.325315] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x14ea100) on tqpair(0x1488690): expected_datao=0, payload_size=4096 00:29:20.396 [2024-11-25 14:27:25.325320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.325328] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.325333] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.396 [2024-11-25 14:27:25.368180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.396 [2024-11-25 14:27:25.368183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.396 [2024-11-25 14:27:25.368198] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:20.396 [2024-11-25 14:27:25.368209] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:20.396 [2024-11-25 14:27:25.368213] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:20.396 [2024-11-25 14:27:25.368219] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:20.396 [2024-11-25 14:27:25.368224] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:20.396 [2024-11-25 14:27:25.368229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:20.396 [2024-11-25 14:27:25.368238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:20.396 [2024-11-25 14:27:25.368246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.368262] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:20.396 [2024-11-25 14:27:25.368275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.396 [2024-11-25 14:27:25.368467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.396 [2024-11-25 14:27:25.368474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.396 [2024-11-25 14:27:25.368482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.396 [2024-11-25 14:27:25.368495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1488690) 00:29:20.396 
[2024-11-25 14:27:25.368509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.396 [2024-11-25 14:27:25.368516] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.368529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.396 [2024-11-25 14:27:25.368535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.368548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.396 [2024-11-25 14:27:25.368554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.368567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.396 [2024-11-25 14:27:25.368572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:20.396 [2024-11-25 14:27:25.368584] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:20.396 [2024-11-25 14:27:25.368591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.396 [2024-11-25 14:27:25.368594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1488690) 00:29:20.396 [2024-11-25 14:27:25.368601] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.396 [2024-11-25 14:27:25.368614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea100, cid 0, qid 0 00:29:20.396 [2024-11-25 14:27:25.368619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea280, cid 1, qid 0 00:29:20.396 [2024-11-25 14:27:25.368624] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea400, cid 2, qid 0 00:29:20.396 [2024-11-25 14:27:25.368629] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.396 [2024-11-25 14:27:25.368633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea700, cid 4, qid 0 00:29:20.396 [2024-11-25 14:27:25.368868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.396 [2024-11-25 14:27:25.368875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.397 [2024-11-25 14:27:25.368879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:29:20.397 [2024-11-25 14:27:25.368883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea700) on tqpair=0x1488690 00:29:20.397 [2024-11-25 14:27:25.368888] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:20.397 [2024-11-25 14:27:25.368896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:20.397 [2024-11-25 14:27:25.368909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.368913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1488690) 00:29:20.397 [2024-11-25 14:27:25.368919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.397 [2024-11-25 14:27:25.368930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea700, cid 4, qid 0 00:29:20.397 [2024-11-25 14:27:25.369122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.397 [2024-11-25 14:27:25.369128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.397 [2024-11-25 14:27:25.369132] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369136] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1488690): datao=0, datal=4096, cccid=4 00:29:20.397 [2024-11-25 14:27:25.369140] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ea700) on tqpair(0x1488690): expected_datao=0, payload_size=4096 00:29:20.397 [2024-11-25 14:27:25.369145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369152] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369156] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.397 [2024-11-25 14:27:25.369370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.397 [2024-11-25 14:27:25.369373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea700) on tqpair=0x1488690 00:29:20.397 [2024-11-25 14:27:25.369392] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:20.397 [2024-11-25 14:27:25.369421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1488690) 00:29:20.397 [2024-11-25 14:27:25.369432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.397 [2024-11-25 14:27:25.369439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1488690) 00:29:20.397 [2024-11-25 14:27:25.369453] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.397 [2024-11-25 14:27:25.369468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea700, cid 4, qid 0 00:29:20.397 [2024-11-25 14:27:25.369473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea880, cid 5, qid 0 00:29:20.397 [2024-11-25 14:27:25.369732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.397 [2024-11-25 14:27:25.369738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.397 [2024-11-25 14:27:25.369742] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369745] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1488690): datao=0, datal=1024, cccid=4 00:29:20.397 [2024-11-25 14:27:25.369750] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ea700) on tqpair(0x1488690): expected_datao=0, payload_size=1024 00:29:20.397 [2024-11-25 14:27:25.369754] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369762] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369765] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369774] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.397 [2024-11-25 14:27:25.369780] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.397 [2024-11-25 14:27:25.369783] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.369787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea880) on tqpair=0x1488690 00:29:20.397 [2024-11-25 14:27:25.410337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.397 [2024-11-25 14:27:25.410349] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.397 [2024-11-25 14:27:25.410352] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.410356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea700) on tqpair=0x1488690 00:29:20.397 [2024-11-25 14:27:25.410369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.410373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1488690) 00:29:20.397 [2024-11-25 14:27:25.410381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.397 [2024-11-25 14:27:25.410397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea700, cid 4, qid 0 00:29:20.397 [2024-11-25 14:27:25.410642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.397 [2024-11-25 14:27:25.410649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.397 [2024-11-25 14:27:25.410653] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.410656] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1488690): datao=0, datal=3072, cccid=4 00:29:20.397 [2024-11-25 14:27:25.410661] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ea700) on tqpair(0x1488690): expected_datao=0, payload_size=3072 00:29:20.397 [2024-11-25 14:27:25.410666] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.410673] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.410677] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.456174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.397 [2024-11-25 14:27:25.456185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.397 [2024-11-25 14:27:25.456188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.456192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea700) on tqpair=0x1488690 00:29:20.397 [2024-11-25 14:27:25.456203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.456207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1488690) 00:29:20.397 [2024-11-25 14:27:25.456214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.397 [2024-11-25 14:27:25.456229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea700, cid 4, qid 0 00:29:20.397 [2024-11-25 14:27:25.456383] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.397 [2024-11-25 14:27:25.456389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.397 [2024-11-25 14:27:25.456393] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.456397] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1488690): datao=0, datal=8, cccid=4 00:29:20.397 [2024-11-25 14:27:25.456401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14ea700) on tqpair(0x1488690): expected_datao=0, payload_size=8 00:29:20.397 [2024-11-25 14:27:25.456406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.456413] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.397 [2024-11-25 14:27:25.456416] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.661 [2024-11-25 14:27:25.498314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.661 [2024-11-25 14:27:25.498332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.661 [2024-11-25 14:27:25.498336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.661 [2024-11-25 14:27:25.498340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea700) on tqpair=0x1488690 00:29:20.661 ===================================================== 00:29:20.661 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:20.661 ===================================================== 00:29:20.661 Controller Capabilities/Features 00:29:20.661 ================================ 00:29:20.661 Vendor ID: 0000 00:29:20.661 Subsystem Vendor ID: 0000 00:29:20.661 Serial Number: .................... 00:29:20.661 Model Number: ........................................ 
00:29:20.661 Firmware Version: 25.01 00:29:20.661 Recommended Arb Burst: 0 00:29:20.661 IEEE OUI Identifier: 00 00 00 00:29:20.661 Multi-path I/O 00:29:20.661 May have multiple subsystem ports: No 00:29:20.661 May have multiple controllers: No 00:29:20.661 Associated with SR-IOV VF: No 00:29:20.661 Max Data Transfer Size: 131072 00:29:20.661 Max Number of Namespaces: 0 00:29:20.661 Max Number of I/O Queues: 1024 00:29:20.661 NVMe Specification Version (VS): 1.3 00:29:20.661 NVMe Specification Version (Identify): 1.3 00:29:20.661 Maximum Queue Entries: 128 00:29:20.661 Contiguous Queues Required: Yes 00:29:20.661 Arbitration Mechanisms Supported 00:29:20.661 Weighted Round Robin: Not Supported 00:29:20.661 Vendor Specific: Not Supported 00:29:20.661 Reset Timeout: 15000 ms 00:29:20.661 Doorbell Stride: 4 bytes 00:29:20.661 NVM Subsystem Reset: Not Supported 00:29:20.661 Command Sets Supported 00:29:20.661 NVM Command Set: Supported 00:29:20.661 Boot Partition: Not Supported 00:29:20.661 Memory Page Size Minimum: 4096 bytes 00:29:20.661 Memory Page Size Maximum: 4096 bytes 00:29:20.661 Persistent Memory Region: Not Supported 00:29:20.661 Optional Asynchronous Events Supported 00:29:20.661 Namespace Attribute Notices: Not Supported 00:29:20.661 Firmware Activation Notices: Not Supported 00:29:20.661 ANA Change Notices: Not Supported 00:29:20.661 PLE Aggregate Log Change Notices: Not Supported 00:29:20.661 LBA Status Info Alert Notices: Not Supported 00:29:20.661 EGE Aggregate Log Change Notices: Not Supported 00:29:20.661 Normal NVM Subsystem Shutdown event: Not Supported 00:29:20.661 Zone Descriptor Change Notices: Not Supported 00:29:20.661 Discovery Log Change Notices: Supported 00:29:20.661 Controller Attributes 00:29:20.661 128-bit Host Identifier: Not Supported 00:29:20.661 Non-Operational Permissive Mode: Not Supported 00:29:20.661 NVM Sets: Not Supported 00:29:20.661 Read Recovery Levels: Not Supported 00:29:20.661 Endurance Groups: Not Supported 00:29:20.661 Predictable Latency Mode: Not Supported 00:29:20.661 Traffic Based Keep ALive: Not Supported 00:29:20.661 Namespace Granularity: Not Supported 00:29:20.661 SQ Associations: Not Supported 00:29:20.661 UUID List: Not Supported 00:29:20.661 Multi-Domain Subsystem: Not Supported 00:29:20.661 Fixed Capacity Management: Not Supported 00:29:20.661 Variable Capacity Management: Not Supported 00:29:20.661 Delete Endurance Group: Not Supported 00:29:20.661 Delete NVM Set: Not Supported 00:29:20.661 Extended LBA Formats Supported: Not Supported 00:29:20.661 Flexible Data Placement Supported: Not Supported 00:29:20.661 00:29:20.661 Controller Memory Buffer Support 00:29:20.661 ================================ 00:29:20.661 Supported: No 00:29:20.661 00:29:20.661 Persistent Memory Region Support 00:29:20.661 ================================ 00:29:20.661 Supported: No 00:29:20.661 00:29:20.661 Admin Command Set Attributes 00:29:20.661 ============================ 00:29:20.661 Security Send/Receive: Not Supported 00:29:20.661 Format NVM: Not Supported 00:29:20.661 Firmware Activate/Download: Not Supported 00:29:20.661 Namespace Management: Not Supported 00:29:20.661 Device Self-Test: Not Supported 00:29:20.661 Directives: Not Supported 00:29:20.661 NVMe-MI: Not Supported 00:29:20.661 Virtualization Management: Not Supported 00:29:20.661 Doorbell Buffer Config: Not Supported 00:29:20.661 Get LBA Status Capability: Not Supported 00:29:20.661 Command & Feature Lockdown Capability: Not Supported 00:29:20.661 Abort Command Limit: 1 00:29:20.661 Async 
Event Request Limit: 4 00:29:20.661 Number of Firmware Slots: N/A 00:29:20.661 Firmware Slot 1 Read-Only: N/A 00:29:20.661 Firmware Activation Without Reset: N/A 00:29:20.661 Multiple Update Detection Support: N/A 00:29:20.661 Firmware Update Granularity: No Information Provided 00:29:20.661 Per-Namespace SMART Log: No 00:29:20.661 Asymmetric Namespace Access Log Page: Not Supported 00:29:20.661 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:20.661 Command Effects Log Page: Not Supported 00:29:20.661 Get Log Page Extended Data: Supported 00:29:20.661 Telemetry Log Pages: Not Supported 00:29:20.661 Persistent Event Log Pages: Not Supported 00:29:20.661 Supported Log Pages Log Page: May Support 00:29:20.661 Commands Supported & Effects Log Page: Not Supported 00:29:20.661 Feature Identifiers & Effects Log Page:May Support 00:29:20.661 NVMe-MI Commands & Effects Log Page: May Support 00:29:20.661 Data Area 4 for Telemetry Log: Not Supported 00:29:20.661 Error Log Page Entries Supported: 128 00:29:20.661 Keep Alive: Not Supported 00:29:20.661 00:29:20.661 NVM Command Set Attributes 00:29:20.661 ========================== 00:29:20.661 Submission Queue Entry Size 00:29:20.661 Max: 1 00:29:20.661 Min: 1 00:29:20.661 Completion Queue Entry Size 00:29:20.661 Max: 1 00:29:20.661 Min: 1 00:29:20.661 Number of Namespaces: 0 00:29:20.661 Compare Command: Not Supported 00:29:20.661 Write Uncorrectable Command: Not Supported 00:29:20.661 Dataset Management Command: Not Supported 00:29:20.661 Write Zeroes Command: Not Supported 00:29:20.661 Set Features Save Field: Not Supported 00:29:20.661 Reservations: Not Supported 00:29:20.661 Timestamp: Not Supported 00:29:20.661 Copy: Not Supported 00:29:20.661 Volatile Write Cache: Not Present 00:29:20.661 Atomic Write Unit (Normal): 1 00:29:20.661 Atomic Write Unit (PFail): 1 00:29:20.661 Atomic Compare & Write Unit: 1 00:29:20.661 Fused Compare & Write: Supported 00:29:20.661 Scatter-Gather List 00:29:20.661 SGL Command Set: Supported 00:29:20.661 SGL Keyed: Supported 00:29:20.661 SGL Bit Bucket Descriptor: Not Supported 00:29:20.661 SGL Metadata Pointer: Not Supported 00:29:20.661 Oversized SGL: Not Supported 00:29:20.661 SGL Metadata Address: Not Supported 00:29:20.661 SGL Offset: Supported 00:29:20.661 Transport SGL Data Block: Not Supported 00:29:20.661 Replay Protected Memory Block: Not Supported 00:29:20.661 00:29:20.661 Firmware Slot Information 00:29:20.661 ========================= 00:29:20.661 Active slot: 0 00:29:20.661 00:29:20.661 00:29:20.661 Error Log 00:29:20.661 ========= 00:29:20.661 00:29:20.662 Active Namespaces 00:29:20.662 ================= 00:29:20.662 Discovery Log Page 00:29:20.662 ================== 00:29:20.662 Generation Counter: 2 00:29:20.662 Number of Records: 2 00:29:20.662 Record Format: 0 00:29:20.662 00:29:20.662 Discovery Log Entry 0 00:29:20.662 ---------------------- 00:29:20.662 Transport Type: 3 (TCP) 00:29:20.662 Address Family: 1 (IPv4) 00:29:20.662 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:20.662 Entry Flags: 00:29:20.662 Duplicate Returned Information: 1 00:29:20.662 Explicit Persistent Connection Support for Discovery: 1 00:29:20.662 Transport Requirements: 00:29:20.662 Secure Channel: Not Required 00:29:20.662 Port ID: 0 (0x0000) 00:29:20.662 Controller ID: 65535 (0xffff) 00:29:20.662 Admin Max SQ Size: 128 00:29:20.662 Transport Service Identifier: 4420 00:29:20.662 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:20.662 Transport Address: 10.0.0.2 00:29:20.662 
Discovery Log Entry 1 00:29:20.662 ---------------------- 00:29:20.662 Transport Type: 3 (TCP) 00:29:20.662 Address Family: 1 (IPv4) 00:29:20.662 Subsystem Type: 2 (NVM Subsystem) 00:29:20.662 Entry Flags: 00:29:20.662 Duplicate Returned Information: 0 00:29:20.662 Explicit Persistent Connection Support for Discovery: 0 00:29:20.662 Transport Requirements: 00:29:20.662 Secure Channel: Not Required 00:29:20.662 Port ID: 0 (0x0000) 00:29:20.662 Controller ID: 65535 (0xffff) 00:29:20.662 Admin Max SQ Size: 128 00:29:20.662 Transport Service Identifier: 4420 00:29:20.662 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:20.662 Transport Address: 10.0.0.2 [2024-11-25 14:27:25.498445] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:20.662 [2024-11-25 14:27:25.498458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea100) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.498465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.662 [2024-11-25 14:27:25.498471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea280) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.498475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.662 [2024-11-25 14:27:25.498481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea400) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.498485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.662 [2024-11-25 14:27:25.498490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.498495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.662 [2024-11-25 14:27:25.498508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.498512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.498516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.662 [2024-11-25 14:27:25.498524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.662 [2024-11-25 14:27:25.498540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.662 [2024-11-25 14:27:25.498786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.662 [2024-11-25 14:27:25.498793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.662 [2024-11-25 14:27:25.498796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.498800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.498808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.498811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.498815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.662 [2024-11-25 
14:27:25.498822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.662 [2024-11-25 14:27:25.498837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.662 [2024-11-25 14:27:25.499031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.662 [2024-11-25 14:27:25.499037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.662 [2024-11-25 14:27:25.499041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.499050] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:20.662 [2024-11-25 14:27:25.499054] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:20.662 [2024-11-25 14:27:25.499065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.662 [2024-11-25 14:27:25.499082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.662 [2024-11-25 14:27:25.499093] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.662 [2024-11-25 14:27:25.499278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.662 [2024-11-25 14:27:25.499285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.662 [2024-11-25 14:27:25.499288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.499303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499306] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.662 [2024-11-25 14:27:25.499317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.662 [2024-11-25 14:27:25.499327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.662 [2024-11-25 14:27:25.499557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.662 [2024-11-25 14:27:25.499563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.662 [2024-11-25 14:27:25.499567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.499582] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499586] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499589] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.662 [2024-11-25 14:27:25.499596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.662 [2024-11-25 14:27:25.499606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.662 [2024-11-25 14:27:25.499792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.662 [2024-11-25 14:27:25.499798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.662 [2024-11-25 14:27:25.499802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.499816] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.499823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.662 [2024-11-25 14:27:25.499830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.662 [2024-11-25 14:27:25.499840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.662 [2024-11-25 14:27:25.500054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.662 [2024-11-25 14:27:25.500061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.662 [2024-11-25 14:27:25.500064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.500068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.500078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.500082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.500086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1488690) 00:29:20.662 [2024-11-25 14:27:25.500095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.662 [2024-11-25 14:27:25.500106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14ea580, cid 3, qid 0 00:29:20.662 [2024-11-25 14:27:25.504170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.662 [2024-11-25 14:27:25.504180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.662 [2024-11-25 14:27:25.504183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.662 [2024-11-25 14:27:25.504187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14ea580) on tqpair=0x1488690 00:29:20.662 [2024-11-25 14:27:25.504195] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:29:20.662 00:29:20.662 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
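[This closes the first identify pass against the discovery subsystem (note the "shutdown complete in 5 milliseconds" record) and starts the second pass against nqn.2016-06.io.spdk:cnode1 itself, whose output follows. Both invocations can be replayed standalone; a minimal sketch, with the transport-ID strings copied verbatim from the trace. The -L all flag is what produces the *DEBUG* records above and can be dropped for quieter output.]

#!/usr/bin/env bash
IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify

# Pass 1: the discovery controller (prints the discovery log page, 2 records).
$IDENTIFY -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

# Pass 2: the NVM subsystem itself (cnode1 with the Malloc0 namespace).
$IDENTIFY -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all

# Rough kernel-side equivalent of pass 1, assuming nvme-cli is installed
# (the job sets SPDK_TEST_NVME_CLI=1, but this exact command is not run here):
#   nvme discover -t tcp -a 10.0.0.2 -s 4420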
00:29:20.662 [2024-11-25 14:27:25.550732] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:29:20.663 [2024-11-25 14:27:25.550784] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529937 ] 00:29:20.663 [2024-11-25 14:27:25.615676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:20.663 [2024-11-25 14:27:25.615742] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:20.663 [2024-11-25 14:27:25.615747] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:20.663 [2024-11-25 14:27:25.615769] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:20.663 [2024-11-25 14:27:25.615779] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:20.663 [2024-11-25 14:27:25.616476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:20.663 [2024-11-25 14:27:25.616516] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11b4690 0 00:29:20.663 [2024-11-25 14:27:25.627175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:20.663 [2024-11-25 14:27:25.627193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:20.663 [2024-11-25 14:27:25.627199] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:20.663 [2024-11-25 14:27:25.627203] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:20.663 [2024-11-25 14:27:25.627240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.627248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.627253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.627269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:20.663 [2024-11-25 14:27:25.627292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.635173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.663 [2024-11-25 14:27:25.635185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.663 [2024-11-25 14:27:25.635189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.663 [2024-11-25 14:27:25.635206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:20.663 [2024-11-25 14:27:25.635218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:20.663 [2024-11-25 14:27:25.635224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:20.663 [2024-11-25 14:27:25.635237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635242] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.635254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.663 [2024-11-25 14:27:25.635270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.635484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.663 [2024-11-25 14:27:25.635490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.663 [2024-11-25 14:27:25.635494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.663 [2024-11-25 14:27:25.635503] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:20.663 [2024-11-25 14:27:25.635510] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:20.663 [2024-11-25 14:27:25.635518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.635532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.663 [2024-11-25 14:27:25.635542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.635734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.663 [2024-11-25 14:27:25.635740] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.663 [2024-11-25 14:27:25.635744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.663 [2024-11-25 14:27:25.635753] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:20.663 [2024-11-25 14:27:25.635761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:20.663 [2024-11-25 14:27:25.635768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.635775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.635782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.663 [2024-11-25 14:27:25.635792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.635988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.663 [2024-11-25 
14:27:25.635995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.663 [2024-11-25 14:27:25.635998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.663 [2024-11-25 14:27:25.636007] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:20.663 [2024-11-25 14:27:25.636020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636028] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.636034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.663 [2024-11-25 14:27:25.636045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.636221] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.663 [2024-11-25 14:27:25.636230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.663 [2024-11-25 14:27:25.636233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.663 [2024-11-25 14:27:25.636242] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:20.663 [2024-11-25 14:27:25.636247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:20.663 [2024-11-25 14:27:25.636254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:20.663 [2024-11-25 14:27:25.636363] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:20.663 [2024-11-25 14:27:25.636368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:20.663 [2024-11-25 14:27:25.636376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.636391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.663 [2024-11-25 14:27:25.636402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.636624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.663 [2024-11-25 14:27:25.636631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.663 [2024-11-25 14:27:25.636634] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.663 [2024-11-25 
14:27:25.636638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.663 [2024-11-25 14:27:25.636643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:20.663 [2024-11-25 14:27:25.636653] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.636667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.663 [2024-11-25 14:27:25.636678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.636894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.663 [2024-11-25 14:27:25.636901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.663 [2024-11-25 14:27:25.636904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.663 [2024-11-25 14:27:25.636912] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:20.663 [2024-11-25 14:27:25.636920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:20.663 [2024-11-25 14:27:25.636928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:20.663 [2024-11-25 14:27:25.636939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:20.663 [2024-11-25 14:27:25.636948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.663 [2024-11-25 14:27:25.636952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.663 [2024-11-25 14:27:25.636959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.663 [2024-11-25 14:27:25.636970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.663 [2024-11-25 14:27:25.637215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.663 [2024-11-25 14:27:25.637222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.663 [2024-11-25 14:27:25.637226] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637230] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=4096, cccid=0 00:29:20.664 [2024-11-25 14:27:25.637234] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216100) on tqpair(0x11b4690): expected_datao=0, payload_size=4096 00:29:20.664 [2024-11-25 14:27:25.637239] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637258] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637263] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.664 [2024-11-25 14:27:25.637418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.664 [2024-11-25 14:27:25.637421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.664 [2024-11-25 14:27:25.637433] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:20.664 [2024-11-25 14:27:25.637441] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:20.664 [2024-11-25 14:27:25.637445] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:20.664 [2024-11-25 14:27:25.637450] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:20.664 [2024-11-25 14:27:25.637454] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:20.664 [2024-11-25 14:27:25.637459] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.637468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.637474] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.637489] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:20.664 [2024-11-25 14:27:25.637501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.664 [2024-11-25 14:27:25.637710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.664 [2024-11-25 14:27:25.637716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.664 [2024-11-25 14:27:25.637720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.664 [2024-11-25 14:27:25.637731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.637744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.664 [2024-11-25 14:27:25.637751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637754] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.637764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.664 [2024-11-25 14:27:25.637770] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637773] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637777] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.637783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.664 [2024-11-25 14:27:25.637788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637796] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.637801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.664 [2024-11-25 14:27:25.637806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.637817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.637824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.637828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.637834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.664 [2024-11-25 14:27:25.637846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216100, cid 0, qid 0 00:29:20.664 [2024-11-25 14:27:25.637852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216280, cid 1, qid 0 00:29:20.664 [2024-11-25 14:27:25.637857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216400, cid 2, qid 0 00:29:20.664 [2024-11-25 14:27:25.637861] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.664 [2024-11-25 14:27:25.637866] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216700, cid 4, qid 0 00:29:20.664 [2024-11-25 14:27:25.638117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.664 [2024-11-25 14:27:25.638123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.664 [2024-11-25 14:27:25.638127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216700) on tqpair=0x11b4690 00:29:20.664 [2024-11-25 14:27:25.638135] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 
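
Everything traced up to this point (the TCP socket connect, the ICReq/ICResp exchange, FABRIC CONNECT on the admin queue, the CC.EN/CSTS.RDY handshake carried over Fabrics Property Get/Set, Identify Controller, AER configuration, and the keep-alive negotiation) is driven by a single spdk_nvme_connect() call; the test binary does not issue these admin commands by hand. A minimal host-side sketch, assuming the public SPDK API and the target coordinates visible in this log (TCP, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1); the program name and option values are illustrative, not the exact code under test:

/*
 * Minimal sketch of the host-side calls that drive the admin state machine
 * traced in this log. Illustrative only: program name, option values and
 * error handling are assumptions, not the actual test binary.
 */
#include "spdk/env.h"
#include "spdk/nvme.h"

#include <stdio.h>
#include <string.h>

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_ctrlr_opts ctrlr_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;

    env_opts.opts_size = sizeof(env_opts);
    spdk_env_opts_init(&env_opts);
    env_opts.name = "nvmf_identify_sketch";      /* hypothetical name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Transport ID of the controller shown in this log. */
    memset(&trid, 0, sizeof(trid));
    spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
    trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;         /* "adrfam 1 ai_family 2" above */
    snprintf(trid.traddr, sizeof(trid.traddr), "%s", "10.0.0.2");
    snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", "4420");
    snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", "nqn.2016-06.io.spdk:cnode1");

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
    /* 10 s keep-alive timeout; the driver sends Keep Alive at half that
     * interval, matching "Sending keep alive every 5000000 us" above. */
    ctrlr_opts.keep_alive_timeout_ms = 10000;

    /* This one call performs everything traced so far: socket connect,
     * ICReq/ICResp, FABRIC CONNECT, the CC.EN/CSTS.RDY handshake,
     * Identify Controller, AER configuration, keep-alive setup. */
    ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
    if (ctrlr == NULL) {
        fprintf(stderr, "spdk_nvme_connect() failed\n");
        return 1;
    }

    /* Identify data cached during initialization. */
    const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Model Number: %.40s\n", (const char *)cdata->mn);

    spdk_nvme_detach(ctrlr);    /* triggers the shutdown sequence traced below */
    return 0;
}

Each FABRIC PROPERTY GET/SET and IDENTIFY block in the surrounding trace corresponds to one "setting state to ..." transition of that connect-time state machine.
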
00:29:20.664 [2024-11-25 14:27:25.638143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.638151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.638164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.638171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.638185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:20.664 [2024-11-25 14:27:25.638196] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216700, cid 4, qid 0 00:29:20.664 [2024-11-25 14:27:25.638406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.664 [2024-11-25 14:27:25.638413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.664 [2024-11-25 14:27:25.638417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216700) on tqpair=0x11b4690 00:29:20.664 [2024-11-25 14:27:25.638486] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.638495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.638503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.638513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.664 [2024-11-25 14:27:25.638524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216700, cid 4, qid 0 00:29:20.664 [2024-11-25 14:27:25.638755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.664 [2024-11-25 14:27:25.638762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.664 [2024-11-25 14:27:25.638765] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=4096, cccid=4 00:29:20.664 [2024-11-25 14:27:25.638774] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216700) on tqpair(0x11b4690): expected_datao=0, payload_size=4096 00:29:20.664 [2024-11-25 14:27:25.638778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638792] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.638796] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.683170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.664 [2024-11-25 14:27:25.683186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.664 [2024-11-25 14:27:25.683189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.683193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216700) on tqpair=0x11b4690 00:29:20.664 [2024-11-25 14:27:25.683206] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:20.664 [2024-11-25 14:27:25.683227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.683241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:20.664 [2024-11-25 14:27:25.683249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.683254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b4690) 00:29:20.664 [2024-11-25 14:27:25.683262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.664 [2024-11-25 14:27:25.683277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216700, cid 4, qid 0 00:29:20.664 [2024-11-25 14:27:25.683469] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.664 [2024-11-25 14:27:25.683476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.664 [2024-11-25 14:27:25.683480] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.683484] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=4096, cccid=4 00:29:20.664 [2024-11-25 14:27:25.683488] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216700) on tqpair(0x11b4690): expected_datao=0, payload_size=4096 00:29:20.664 [2024-11-25 14:27:25.683493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.664 [2024-11-25 14:27:25.683507] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.665 [2024-11-25 14:27:25.683511] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.665 [2024-11-25 14:27:25.725314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.665 [2024-11-25 14:27:25.725326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.665 [2024-11-25 14:27:25.725330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.665 [2024-11-25 14:27:25.725334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216700) on tqpair=0x11b4690 00:29:20.665 [2024-11-25 14:27:25.725351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:20.665 [2024-11-25 14:27:25.725361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:20.665 [2024-11-25 14:27:25.725370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.665 
[2024-11-25 14:27:25.725374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b4690) 00:29:20.665 [2024-11-25 14:27:25.725382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.665 [2024-11-25 14:27:25.725394] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216700, cid 4, qid 0 00:29:20.665 [2024-11-25 14:27:25.725590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.665 [2024-11-25 14:27:25.725596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.665 [2024-11-25 14:27:25.725600] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.665 [2024-11-25 14:27:25.725604] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=4096, cccid=4 00:29:20.665 [2024-11-25 14:27:25.725608] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216700) on tqpair(0x11b4690): expected_datao=0, payload_size=4096 00:29:20.665 [2024-11-25 14:27:25.725613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.665 [2024-11-25 14:27:25.725642] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.665 [2024-11-25 14:27:25.725646] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.928 [2024-11-25 14:27:25.767353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.928 [2024-11-25 14:27:25.767357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216700) on tqpair=0x11b4690 00:29:20.928 [2024-11-25 14:27:25.767375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:20.928 [2024-11-25 14:27:25.767384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:20.928 [2024-11-25 14:27:25.767396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:20.928 [2024-11-25 14:27:25.767402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:20.928 [2024-11-25 14:27:25.767407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:20.928 [2024-11-25 14:27:25.767412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:20.928 [2024-11-25 14:27:25.767418] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:20.928 [2024-11-25 14:27:25.767422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:20.928 [2024-11-25 14:27:25.767428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:20.928 [2024-11-25 14:27:25.767447] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767451] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b4690) 00:29:20.928 [2024-11-25 14:27:25.767458] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.928 [2024-11-25 14:27:25.767465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11b4690) 00:29:20.928 [2024-11-25 14:27:25.767479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.928 [2024-11-25 14:27:25.767494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216700, cid 4, qid 0 00:29:20.928 [2024-11-25 14:27:25.767500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216880, cid 5, qid 0 00:29:20.928 [2024-11-25 14:27:25.767639] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.928 [2024-11-25 14:27:25.767645] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.928 [2024-11-25 14:27:25.767649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216700) on tqpair=0x11b4690 00:29:20.928 [2024-11-25 14:27:25.767660] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.928 [2024-11-25 14:27:25.767666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.928 [2024-11-25 14:27:25.767669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216880) on tqpair=0x11b4690 00:29:20.928 [2024-11-25 14:27:25.767682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11b4690) 00:29:20.928 [2024-11-25 14:27:25.767692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.928 [2024-11-25 14:27:25.767703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216880, cid 5, qid 0 00:29:20.928 [2024-11-25 14:27:25.767970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.928 [2024-11-25 14:27:25.767976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.928 [2024-11-25 14:27:25.767982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216880) on tqpair=0x11b4690 00:29:20.928 [2024-11-25 14:27:25.767995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.767999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11b4690) 00:29:20.928 [2024-11-25 14:27:25.768005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:20.928 [2024-11-25 14:27:25.768015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216880, cid 5, qid 0 00:29:20.928 [2024-11-25 14:27:25.768301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.928 [2024-11-25 14:27:25.768308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.928 [2024-11-25 14:27:25.768311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.768315] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216880) on tqpair=0x11b4690 00:29:20.928 [2024-11-25 14:27:25.768325] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.768329] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11b4690) 00:29:20.928 [2024-11-25 14:27:25.768335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.928 [2024-11-25 14:27:25.768346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216880, cid 5, qid 0 00:29:20.928 [2024-11-25 14:27:25.768573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.928 [2024-11-25 14:27:25.768579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.928 [2024-11-25 14:27:25.768583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.928 [2024-11-25 14:27:25.768587] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216880) on tqpair=0x11b4690 00:29:20.928 [2024-11-25 14:27:25.768603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.768607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11b4690) 00:29:20.929 [2024-11-25 14:27:25.768614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.929 [2024-11-25 14:27:25.768622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.768625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11b4690) 00:29:20.929 [2024-11-25 14:27:25.768631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.929 [2024-11-25 14:27:25.768639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.768642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x11b4690) 00:29:20.929 [2024-11-25 14:27:25.768649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.929 [2024-11-25 14:27:25.768656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.768660] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11b4690) 00:29:20.929 [2024-11-25 14:27:25.768666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.929 [2024-11-25 14:27:25.768678] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216880, cid 5, qid 0 00:29:20.929 [2024-11-25 14:27:25.768685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216700, cid 4, qid 0 00:29:20.929 [2024-11-25 14:27:25.768691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216a00, cid 6, qid 0 00:29:20.929 [2024-11-25 14:27:25.768702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216b80, cid 7, qid 0 00:29:20.929 [2024-11-25 14:27:25.769032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.929 [2024-11-25 14:27:25.769039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.929 [2024-11-25 14:27:25.769043] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769046] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=8192, cccid=5 00:29:20.929 [2024-11-25 14:27:25.769051] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216880) on tqpair(0x11b4690): expected_datao=0, payload_size=8192 00:29:20.929 [2024-11-25 14:27:25.769055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769087] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769091] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.929 [2024-11-25 14:27:25.769103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.929 [2024-11-25 14:27:25.769106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=512, cccid=4 00:29:20.929 [2024-11-25 14:27:25.769114] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216700) on tqpair(0x11b4690): expected_datao=0, payload_size=512 00:29:20.929 [2024-11-25 14:27:25.769119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769129] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.929 [2024-11-25 14:27:25.769140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.929 [2024-11-25 14:27:25.769144] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769147] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=512, cccid=6 00:29:20.929 [2024-11-25 14:27:25.769151] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216a00) on tqpair(0x11b4690): expected_datao=0, payload_size=512 00:29:20.929 [2024-11-25 14:27:25.769156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769170] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769174] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:20.929 [2024-11-25 14:27:25.769186] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:20.929 [2024-11-25 14:27:25.769189] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769193] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11b4690): datao=0, datal=4096, cccid=7 00:29:20.929 [2024-11-25 14:27:25.769197] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1216b80) on tqpair(0x11b4690): expected_datao=0, payload_size=4096 00:29:20.929 [2024-11-25 14:27:25.769201] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769218] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.769221] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.814172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.929 [2024-11-25 14:27:25.814182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.929 [2024-11-25 14:27:25.814185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.814189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216880) on tqpair=0x11b4690 00:29:20.929 [2024-11-25 14:27:25.814204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.929 [2024-11-25 14:27:25.814213] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.929 [2024-11-25 14:27:25.814216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.814220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216700) on tqpair=0x11b4690 00:29:20.929 [2024-11-25 14:27:25.814231] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.929 [2024-11-25 14:27:25.814237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.929 [2024-11-25 14:27:25.814241] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.814244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216a00) on tqpair=0x11b4690 00:29:20.929 [2024-11-25 14:27:25.814251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.929 [2024-11-25 14:27:25.814257] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.929 [2024-11-25 14:27:25.814261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.929 [2024-11-25 14:27:25.814265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216b80) on tqpair=0x11b4690 00:29:20.929 ===================================================== 00:29:20.929 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:20.929 ===================================================== 00:29:20.929 Controller Capabilities/Features 00:29:20.929 ================================ 00:29:20.929 Vendor ID: 8086 00:29:20.929 Subsystem Vendor ID: 8086 00:29:20.929 Serial Number: SPDK00000000000001 00:29:20.929 Model Number: SPDK bdev Controller 00:29:20.929 Firmware Version: 25.01 00:29:20.929 Recommended Arb Burst: 6 00:29:20.929 IEEE OUI Identifier: e4 d2 5c 00:29:20.929 Multi-path I/O 00:29:20.929 May have multiple subsystem ports: Yes 00:29:20.929 May have multiple controllers: Yes 00:29:20.929 Associated with SR-IOV VF: No 00:29:20.929 Max Data Transfer Size: 131072 00:29:20.929 Max Number of Namespaces: 32 00:29:20.929 Max Number of I/O Queues: 127 
00:29:20.929 NVMe Specification Version (VS): 1.3 00:29:20.929 NVMe Specification Version (Identify): 1.3 00:29:20.929 Maximum Queue Entries: 128 00:29:20.929 Contiguous Queues Required: Yes 00:29:20.929 Arbitration Mechanisms Supported 00:29:20.929 Weighted Round Robin: Not Supported 00:29:20.929 Vendor Specific: Not Supported 00:29:20.929 Reset Timeout: 15000 ms 00:29:20.929 Doorbell Stride: 4 bytes 00:29:20.929 NVM Subsystem Reset: Not Supported 00:29:20.929 Command Sets Supported 00:29:20.929 NVM Command Set: Supported 00:29:20.929 Boot Partition: Not Supported 00:29:20.929 Memory Page Size Minimum: 4096 bytes 00:29:20.929 Memory Page Size Maximum: 4096 bytes 00:29:20.929 Persistent Memory Region: Not Supported 00:29:20.929 Optional Asynchronous Events Supported 00:29:20.929 Namespace Attribute Notices: Supported 00:29:20.929 Firmware Activation Notices: Not Supported 00:29:20.929 ANA Change Notices: Not Supported 00:29:20.929 PLE Aggregate Log Change Notices: Not Supported 00:29:20.929 LBA Status Info Alert Notices: Not Supported 00:29:20.929 EGE Aggregate Log Change Notices: Not Supported 00:29:20.929 Normal NVM Subsystem Shutdown event: Not Supported 00:29:20.929 Zone Descriptor Change Notices: Not Supported 00:29:20.929 Discovery Log Change Notices: Not Supported 00:29:20.929 Controller Attributes 00:29:20.929 128-bit Host Identifier: Supported 00:29:20.929 Non-Operational Permissive Mode: Not Supported 00:29:20.929 NVM Sets: Not Supported 00:29:20.929 Read Recovery Levels: Not Supported 00:29:20.929 Endurance Groups: Not Supported 00:29:20.929 Predictable Latency Mode: Not Supported 00:29:20.929 Traffic Based Keep Alive: Not Supported 00:29:20.929 Namespace Granularity: Not Supported 00:29:20.929 SQ Associations: Not Supported 00:29:20.929 UUID List: Not Supported 00:29:20.929 Multi-Domain Subsystem: Not Supported 00:29:20.929 Fixed Capacity Management: Not Supported 00:29:20.929 Variable Capacity Management: Not Supported 00:29:20.929 Delete Endurance Group: Not Supported 00:29:20.929 Delete NVM Set: Not Supported 00:29:20.929 Extended LBA Formats Supported: Not Supported 00:29:20.929 Flexible Data Placement Supported: Not Supported 00:29:20.929 00:29:20.929 Controller Memory Buffer Support 00:29:20.929 ================================ 00:29:20.929 Supported: No 00:29:20.929 00:29:20.929 Persistent Memory Region Support 00:29:20.929 ================================ 00:29:20.929 Supported: No 00:29:20.929 00:29:20.929 Admin Command Set Attributes 00:29:20.929 ============================ 00:29:20.929 Security Send/Receive: Not Supported 00:29:20.929 Format NVM: Not Supported 00:29:20.929 Firmware Activate/Download: Not Supported 00:29:20.930 Namespace Management: Not Supported 00:29:20.930 Device Self-Test: Not Supported 00:29:20.930 Directives: Not Supported 00:29:20.930 NVMe-MI: Not Supported 00:29:20.930 Virtualization Management: Not Supported 00:29:20.930 Doorbell Buffer Config: Not Supported 00:29:20.930 Get LBA Status Capability: Not Supported 00:29:20.930 Command & Feature Lockdown Capability: Not Supported 00:29:20.930 Abort Command Limit: 4 00:29:20.930 Async Event Request Limit: 4 00:29:20.930 Number of Firmware Slots: N/A 00:29:20.930 Firmware Slot 1 Read-Only: N/A 00:29:20.930 Firmware Activation Without Reset: N/A 00:29:20.930 Multiple Update Detection Support: N/A 00:29:20.930 Firmware Update Granularity: No Information Provided 00:29:20.930 Per-Namespace SMART Log: No 00:29:20.930 Asymmetric Namespace Access Log Page: Not Supported 00:29:20.930 Subsystem NQN:
nqn.2016-06.io.spdk:cnode1 00:29:20.930 Command Effects Log Page: Supported 00:29:20.930 Get Log Page Extended Data: Supported 00:29:20.930 Telemetry Log Pages: Not Supported 00:29:20.930 Persistent Event Log Pages: Not Supported 00:29:20.930 Supported Log Pages Log Page: May Support 00:29:20.930 Commands Supported & Effects Log Page: Not Supported 00:29:20.930 Feature Identifiers & Effects Log Page: May Support 00:29:20.930 NVMe-MI Commands & Effects Log Page: May Support 00:29:20.930 Data Area 4 for Telemetry Log: Not Supported 00:29:20.930 Error Log Page Entries Supported: 128 00:29:20.930 Keep Alive: Supported 00:29:20.930 Keep Alive Granularity: 10000 ms 00:29:20.930 00:29:20.930 NVM Command Set Attributes 00:29:20.930 ========================== 00:29:20.930 Submission Queue Entry Size 00:29:20.930 Max: 64 00:29:20.930 Min: 64 00:29:20.930 Completion Queue Entry Size 00:29:20.930 Max: 16 00:29:20.930 Min: 16 00:29:20.930 Number of Namespaces: 32 00:29:20.930 Compare Command: Supported 00:29:20.930 Write Uncorrectable Command: Not Supported 00:29:20.930 Dataset Management Command: Supported 00:29:20.930 Write Zeroes Command: Supported 00:29:20.930 Set Features Save Field: Not Supported 00:29:20.930 Reservations: Supported 00:29:20.930 Timestamp: Not Supported 00:29:20.930 Copy: Supported 00:29:20.930 Volatile Write Cache: Present 00:29:20.930 Atomic Write Unit (Normal): 1 00:29:20.930 Atomic Write Unit (PFail): 1 00:29:20.930 Atomic Compare & Write Unit: 1 00:29:20.930 Fused Compare & Write: Supported 00:29:20.930 Scatter-Gather List 00:29:20.930 SGL Command Set: Supported 00:29:20.930 SGL Keyed: Supported 00:29:20.930 SGL Bit Bucket Descriptor: Not Supported 00:29:20.930 SGL Metadata Pointer: Not Supported 00:29:20.930 Oversized SGL: Not Supported 00:29:20.930 SGL Metadata Address: Not Supported 00:29:20.930 SGL Offset: Supported 00:29:20.930 Transport SGL Data Block: Not Supported 00:29:20.930 Replay Protected Memory Block: Not Supported 00:29:20.930 00:29:20.930 Firmware Slot Information 00:29:20.930 ========================= 00:29:20.930 Active slot: 1 00:29:20.930 Slot 1 Firmware Revision: 25.01 00:29:20.930 00:29:20.930 00:29:20.930 Commands Supported and Effects 00:29:20.930 ============================== 00:29:20.930 Admin Commands 00:29:20.930 -------------- 00:29:20.930 Get Log Page (02h): Supported 00:29:20.930 Identify (06h): Supported 00:29:20.930 Abort (08h): Supported 00:29:20.930 Set Features (09h): Supported 00:29:20.930 Get Features (0Ah): Supported 00:29:20.930 Asynchronous Event Request (0Ch): Supported 00:29:20.930 Keep Alive (18h): Supported 00:29:20.930 I/O Commands 00:29:20.930 ------------ 00:29:20.930 Flush (00h): Supported LBA-Change 00:29:20.930 Write (01h): Supported LBA-Change 00:29:20.930 Read (02h): Supported 00:29:20.930 Compare (05h): Supported 00:29:20.930 Write Zeroes (08h): Supported LBA-Change 00:29:20.930 Dataset Management (09h): Supported LBA-Change 00:29:20.930 Copy (19h): Supported LBA-Change 00:29:20.930 00:29:20.930 Error Log 00:29:20.930 ========= 00:29:20.930 00:29:20.930 Arbitration 00:29:20.930 =========== 00:29:20.930 Arbitration Burst: 1 00:29:20.930 00:29:20.930 Power Management 00:29:20.930 ================ 00:29:20.930 Number of Power States: 1 00:29:20.930 Current Power State: Power State #0 00:29:20.930 Power State #0: 00:29:20.930 Max Power: 0.00 W 00:29:20.930 Non-Operational State: Operational 00:29:20.930 Entry Latency: Not Reported 00:29:20.930 Exit Latency: Not Reported 00:29:20.930 Relative Read Throughput: 0 00:29:20.930
Relative Read Latency: 0 00:29:20.930 Relative Write Throughput: 0 00:29:20.930 Relative Write Latency: 0 00:29:20.930 Idle Power: Not Reported 00:29:20.930 Active Power: Not Reported 00:29:20.930 Non-Operational Permissive Mode: Not Supported 00:29:20.930 00:29:20.930 Health Information 00:29:20.930 ================== 00:29:20.930 Critical Warnings: 00:29:20.930 Available Spare Space: OK 00:29:20.930 Temperature: OK 00:29:20.930 Device Reliability: OK 00:29:20.930 Read Only: No 00:29:20.930 Volatile Memory Backup: OK 00:29:20.930 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:20.930 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:20.930 Available Spare: 0% 00:29:20.930 Available Spare Threshold: 0% 00:29:20.930 Life Percentage Used: 0% [2024-11-25 14:27:25.814367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.814372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x11b4690) 00:29:20.930 [2024-11-25 14:27:25.814380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.930 [2024-11-25 14:27:25.814393] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216b80, cid 7, qid 0 00:29:20.930 [2024-11-25 14:27:25.814614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.930 [2024-11-25 14:27:25.814620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.930 [2024-11-25 14:27:25.814624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.814628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216b80) on tqpair=0x11b4690 00:29:20.930 [2024-11-25 14:27:25.814664] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:20.930 [2024-11-25 14:27:25.814673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216100) on tqpair=0x11b4690 00:29:20.930 [2024-11-25 14:27:25.814680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.930 [2024-11-25 14:27:25.814685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216280) on tqpair=0x11b4690 00:29:20.930 [2024-11-25 14:27:25.814690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.930 [2024-11-25 14:27:25.814695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216400) on tqpair=0x11b4690 00:29:20.930 [2024-11-25 14:27:25.814700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.930 [2024-11-25 14:27:25.814705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.930 [2024-11-25 14:27:25.814709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.930 [2024-11-25 14:27:25.814719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.814723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.814726] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.930 [2024-11-25 14:27:25.814734]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.930 [2024-11-25 14:27:25.814746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.930 [2024-11-25 14:27:25.814959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.930 [2024-11-25 14:27:25.814968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.930 [2024-11-25 14:27:25.814971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.814975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.930 [2024-11-25 14:27:25.814982] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.814986] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.814990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.930 [2024-11-25 14:27:25.814996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.930 [2024-11-25 14:27:25.815011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.930 [2024-11-25 14:27:25.815223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.930 [2024-11-25 14:27:25.815229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.930 [2024-11-25 14:27:25.815233] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.815237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.930 [2024-11-25 14:27:25.815242] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:20.930 [2024-11-25 14:27:25.815246] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:20.930 [2024-11-25 14:27:25.815256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.815260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.930 [2024-11-25 14:27:25.815263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.931 [2024-11-25 14:27:25.815270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.931 [2024-11-25 14:27:25.815281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.931 [2024-11-25 14:27:25.815522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.931 [2024-11-25 14:27:25.815528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.931 [2024-11-25 14:27:25.815532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.815535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.931 [2024-11-25 14:27:25.815545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.815549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.815553] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.931 [2024-11-25 14:27:25.815560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.931 [2024-11-25 14:27:25.815570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.931 [2024-11-25 14:27:25.815824] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.931 [2024-11-25 14:27:25.815830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.931 [2024-11-25 14:27:25.815834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.815837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.931 [2024-11-25 14:27:25.815847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.815851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.815854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.931 [2024-11-25 14:27:25.815861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.931 [2024-11-25 14:27:25.815871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.931 [2024-11-25 14:27:25.816061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.931 [2024-11-25 14:27:25.816067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.931 [2024-11-25 14:27:25.816071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.931 [2024-11-25 14:27:25.816085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.931 [2024-11-25 14:27:25.816099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.931 [2024-11-25 14:27:25.816109] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.931 [2024-11-25 14:27:25.816280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.931 [2024-11-25 14:27:25.816286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.931 [2024-11-25 14:27:25.816290] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816294] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.931 [2024-11-25 14:27:25.816303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.931 [2024-11-25 14:27:25.816317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.931 [2024-11-25 14:27:25.816328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.931 [2024-11-25 14:27:25.816529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.931 [2024-11-25 14:27:25.816536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.931 [2024-11-25 14:27:25.816540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.931 [2024-11-25 14:27:25.816553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.816560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.931 [2024-11-25 14:27:25.816567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.931 [2024-11-25 14:27:25.816577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0
(the nine-record DEBUG cycle above repeats verbatim, with fresh timestamps, at 14:27:25.816781, .817042, .817288, .817540 and .817789, each time for tcp req 0x1216580, cid 3 on tqpair 0x11b4690)
00:29:20.931 [2024-11-25 14:27:25.818045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.931 [2024-11-25 14:27:25.818051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.931
[2024-11-25 14:27:25.818058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.818062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.931 [2024-11-25 14:27:25.818072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.818076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.818079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11b4690) 00:29:20.931 [2024-11-25 14:27:25.818086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.931 [2024-11-25 14:27:25.818096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1216580, cid 3, qid 0 00:29:20.931 [2024-11-25 14:27:25.822170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:20.931 [2024-11-25 14:27:25.822179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:20.931 [2024-11-25 14:27:25.822183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:20.931 [2024-11-25 14:27:25.822187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1216580) on tqpair=0x11b4690 00:29:20.931 [2024-11-25 14:27:25.822195] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:29:20.931 0% 00:29:20.931 Data Units Read: 0 00:29:20.931 Data Units Written: 0 00:29:20.931 Host Read Commands: 0 00:29:20.932 Host Write Commands: 0 00:29:20.932 Controller Busy Time: 0 minutes 00:29:20.932 Power Cycles: 0 00:29:20.932 Power On Hours: 0 hours 00:29:20.932 Unsafe Shutdowns: 0 00:29:20.932 Unrecoverable Media Errors: 0 00:29:20.932 Lifetime Error Log Entries: 0 00:29:20.932 Warning Temperature Time: 0 minutes 00:29:20.932 Critical Temperature Time: 0 minutes 00:29:20.932 00:29:20.932 Number of Queues 00:29:20.932 ================ 00:29:20.932 Number of I/O Submission Queues: 127 00:29:20.932 Number of I/O Completion Queues: 127 00:29:20.932 00:29:20.932 Active Namespaces 00:29:20.932 ================= 00:29:20.932 Namespace ID:1 00:29:20.932 Error Recovery Timeout: Unlimited 00:29:20.932 Command Set Identifier: NVM (00h) 00:29:20.932 Deallocate: Supported 00:29:20.932 Deallocated/Unwritten Error: Not Supported 00:29:20.932 Deallocated Read Value: Unknown 00:29:20.932 Deallocate in Write Zeroes: Not Supported 00:29:20.932 Deallocated Guard Field: 0xFFFF 00:29:20.932 Flush: Supported 00:29:20.932 Reservation: Supported 00:29:20.932 Namespace Sharing Capabilities: Multiple Controllers 00:29:20.932 Size (in LBAs): 131072 (0GiB) 00:29:20.932 Capacity (in LBAs): 131072 (0GiB) 00:29:20.932 Utilization (in LBAs): 131072 (0GiB) 00:29:20.932 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:20.932 EUI64: ABCDEF0123456789 00:29:20.932 UUID: 9804e4c2-074f-4ed6-b6c1-5606f04c5cee 00:29:20.932 Thin Provisioning: Not Supported 00:29:20.932 Per-NS Atomic Units: Yes 00:29:20.932 Atomic Boundary Size (Normal): 0 00:29:20.932 Atomic Boundary Size (PFail): 0 00:29:20.932 Atomic Boundary Offset: 0 00:29:20.932 Maximum Single Source Range Length: 65535 00:29:20.932 Maximum Copy Length: 65535 00:29:20.932 Maximum Source Range Count: 1 00:29:20.932 NGUID/EUI64 Never Reused: No 00:29:20.932 Namespace Write Protected: No 00:29:20.932 Number of LBA Formats: 1 00:29:20.932 Current LBA Format: LBA Format #00 
00:29:20.932 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:20.932 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.932 rmmod nvme_tcp 00:29:20.932 rmmod nvme_fabrics 00:29:20.932 rmmod nvme_keyring 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3529771 ']' 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3529771 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3529771 ']' 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3529771 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.932 14:27:25 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3529771 00:29:20.932 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:20.932 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:20.932 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3529771' 00:29:20.932 killing process with pid 3529771 00:29:20.932 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3529771 00:29:20.932 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3529771 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 
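For readers decoding the xtrace above: the killprocess helper that stopped the target app reduces to roughly the following (a minimal sketch reconstructed from the traced commands; the real helper in common/autotest_common.sh carries extra branches, e.g. for processes launched through sudo):

    killprocess() {
        local pid=$1
        # resolve the command name; the trace shows it coming back as reactor_0 (the SPDK app)
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # never blindly kill a sudo wrapper
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" # reap the process so the harness can collect its exit status
        fi
    }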
00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:21.192 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:21.193 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.193 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.193 14:27:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:23.738 00:29:23.738 real 0m11.846s 00:29:23.738 user 0m9.368s 00:29:23.738 sys 0m6.210s 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:23.738 ************************************ 00:29:23.738 END TEST nvmf_identify 00:29:23.738 ************************************ 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:23.738 ************************************ 00:29:23.738 START TEST nvmf_perf 00:29:23.738 ************************************ 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:23.738 * Looking for test storage... 
00:29:23.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:23.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.738 --rc genhtml_branch_coverage=1 00:29:23.738 --rc genhtml_function_coverage=1 00:29:23.738 --rc genhtml_legend=1 00:29:23.738 --rc geninfo_all_blocks=1 00:29:23.738 --rc geninfo_unexecuted_blocks=1 00:29:23.738 00:29:23.738 ' 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:23.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.738 --rc genhtml_branch_coverage=1 00:29:23.738 --rc genhtml_function_coverage=1 00:29:23.738 --rc genhtml_legend=1 00:29:23.738 --rc geninfo_all_blocks=1 00:29:23.738 --rc geninfo_unexecuted_blocks=1 00:29:23.738 00:29:23.738 ' 00:29:23.738 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:23.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.738 --rc genhtml_branch_coverage=1 00:29:23.738 --rc genhtml_function_coverage=1 00:29:23.738 --rc genhtml_legend=1 00:29:23.738 --rc geninfo_all_blocks=1 00:29:23.739 --rc geninfo_unexecuted_blocks=1 00:29:23.739 00:29:23.739 ' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:23.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:23.739 --rc genhtml_branch_coverage=1 00:29:23.739 --rc genhtml_function_coverage=1 00:29:23.739 --rc genhtml_legend=1 00:29:23.739 --rc geninfo_all_blocks=1 00:29:23.739 --rc geninfo_unexecuted_blocks=1 00:29:23.739 00:29:23.739 ' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:23.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.739 14:27:28 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:23.739 14:27:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.983 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:31.984 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:31.984 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:31.984 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.984 14:27:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:31.984 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.984 14:27:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.984 14:27:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:29:31.984 00:29:31.984 --- 10.0.0.2 ping statistics --- 00:29:31.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.984 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:29:31.984 00:29:31.984 --- 10.0.0.1 ping statistics --- 00:29:31.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.984 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3534157 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3534157 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3534157 ']' 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:29:31.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.984 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:31.984 [2024-11-25 14:27:36.171965] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:29:31.984 [2024-11-25 14:27:36.172033] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.984 [2024-11-25 14:27:36.271252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.984 [2024-11-25 14:27:36.324280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.984 [2024-11-25 14:27:36.324333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.984 [2024-11-25 14:27:36.324342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.984 [2024-11-25 14:27:36.324349] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.985 [2024-11-25 14:27:36.324356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.985 [2024-11-25 14:27:36.326327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.985 [2024-11-25 14:27:36.326506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.985 [2024-11-25 14:27:36.326669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.985 [2024-11-25 14:27:36.326669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.985 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.985 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:31.985 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.985 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.985 14:27:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:31.985 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.985 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:31.985 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:32.556 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:32.556 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:32.817 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:29:32.817 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:33.079 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
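Condensed from the rpc.py calls traced just below, the target bring-up for this perf run amounts to the following sequence (a sketch with the long workspace paths shortened; the flags are exactly those shown in the trace):

    # create the TCP transport (-o as carried in NVMF_TRANSPORT_OPTS)
    scripts/rpc.py nvmf_create_transport -t tcp -o
    # subsystem cnode1: -a allows any host, -s sets the serial number
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # expose both bdevs (the malloc ramdisk and the local NVMe drive) as namespaces
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    # data-path listener inside the target netns, plus the discovery service
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420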
00:29:33.079 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:29:33.079 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:33.079 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:33.079 14:27:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:33.079 [2024-11-25 14:27:38.141203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.340 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:33.340 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:33.340 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:33.601 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:33.601 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:33.863 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.863 [2024-11-25 14:27:38.948884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.124 14:27:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.124 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:29:34.124 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:34.124 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:34.124 14:27:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:35.513 Initializing NVMe Controllers 00:29:35.513 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:29:35.513 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:29:35.513 Initialization complete. Launching workers. 
00:29:35.513 ======================================================== 00:29:35.513 Latency(us) 00:29:35.514 Device Information : IOPS MiB/s Average min max 00:29:35.514 PCIE (0000:65:00.0) NSID 1 from core 0: 76991.52 300.75 414.93 35.71 4640.78 00:29:35.514 ======================================================== 00:29:35.514 Total : 76991.52 300.75 414.93 35.71 4640.78 00:29:35.514 00:29:35.514 14:27:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.900 Initializing NVMe Controllers 00:29:36.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:36.900 Initialization complete. Launching workers. 00:29:36.900 ======================================================== 00:29:36.900 Latency(us) 00:29:36.900 Device Information : IOPS MiB/s Average min max 00:29:36.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 113.00 0.44 8936.73 229.86 47642.05 00:29:36.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17974.83 7967.96 49881.85 00:29:36.900 ======================================================== 00:29:36.900 Total : 169.00 0.66 11931.60 229.86 49881.85 00:29:36.900 00:29:36.900 14:27:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.284 Initializing NVMe Controllers 00:29:38.284 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.284 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:38.284 Initialization complete. Launching workers. 00:29:38.284 ======================================================== 00:29:38.284 Latency(us) 00:29:38.284 Device Information : IOPS MiB/s Average min max 00:29:38.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11824.99 46.19 2710.32 431.46 6166.08 00:29:38.284 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3836.00 14.98 8380.07 6895.83 15992.54 00:29:38.284 ======================================================== 00:29:38.284 Total : 15660.98 61.18 4099.07 431.46 15992.54 00:29:38.284 00:29:38.284 14:27:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:38.284 14:27:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:38.284 14:27:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:40.828 Initializing NVMe Controllers 00:29:40.828 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.828 Controller IO queue size 128, less than required. 00:29:40.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:40.828 Controller IO queue size 128, less than required. 00:29:40.828 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.828 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:40.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:40.829 Initialization complete. Launching workers. 00:29:40.829 ======================================================== 00:29:40.829 Latency(us) 00:29:40.829 Device Information : IOPS MiB/s Average min max 00:29:40.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2070.72 517.68 62734.19 33249.35 107167.18 00:29:40.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 632.04 158.01 214726.57 48982.10 325863.52 00:29:40.829 ======================================================== 00:29:40.829 Total : 2702.76 675.69 98277.63 33249.35 325863.52 00:29:40.829 00:29:40.829 14:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:40.829 No valid NVMe controllers or AIO or URING devices found 00:29:40.829 Initializing NVMe Controllers 00:29:40.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.829 Controller IO queue size 128, less than required. 00:29:40.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.829 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:40.829 Controller IO queue size 128, less than required. 00:29:40.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.829 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:29:40.829 WARNING: Some requested NVMe devices were skipped 00:29:40.829 14:27:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:43.373 Initializing NVMe Controllers 00:29:43.373 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.373 Controller IO queue size 128, less than required. 00:29:43.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.373 Controller IO queue size 128, less than required. 00:29:43.373 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.373 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:43.373 Initialization complete. Launching workers. 
00:29:43.373 00:29:43.373 ==================== 00:29:43.373 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:43.373 TCP transport: 00:29:43.373 polls: 42440 00:29:43.373 idle_polls: 23477 00:29:43.373 sock_completions: 18963 00:29:43.373 nvme_completions: 8107 00:29:43.373 submitted_requests: 12190 00:29:43.373 queued_requests: 1 00:29:43.373 00:29:43.373 ==================== 00:29:43.373 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:43.373 TCP transport: 00:29:43.373 polls: 32665 00:29:43.373 idle_polls: 18959 00:29:43.373 sock_completions: 13706 00:29:43.373 nvme_completions: 7363 00:29:43.373 submitted_requests: 11046 00:29:43.373 queued_requests: 1 00:29:43.373 ======================================================== 00:29:43.373 Latency(us) 00:29:43.373 Device Information : IOPS MiB/s Average min max 00:29:43.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2025.68 506.42 64946.77 34502.90 103892.25 00:29:43.373 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1839.76 459.94 69625.22 36694.73 124838.21 00:29:43.373 ======================================================== 00:29:43.373 Total : 3865.44 966.36 67173.48 34502.90 124838.21 00:29:43.373 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.373 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.373 rmmod nvme_tcp 00:29:43.373 rmmod nvme_fabrics 00:29:43.373 rmmod nvme_keyring 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3534157 ']' 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3534157 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3534157 ']' 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3534157 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3534157 00:29:43.633 14:27:48 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3534157' 00:29:43.633 killing process with pid 3534157 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3534157 00:29:43.633 14:27:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3534157 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.546 14:27:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:48.091 00:29:48.091 real 0m24.225s 00:29:48.091 user 0m58.365s 00:29:48.091 sys 0m8.637s 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:48.091 ************************************ 00:29:48.091 END TEST nvmf_perf 00:29:48.091 ************************************ 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.091 ************************************ 00:29:48.091 START TEST nvmf_fio_host 00:29:48.091 ************************************ 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:48.091 * Looking for test storage... 
00:29:48.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:48.091 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:48.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.092 --rc genhtml_branch_coverage=1 00:29:48.092 --rc genhtml_function_coverage=1 00:29:48.092 --rc genhtml_legend=1 00:29:48.092 --rc geninfo_all_blocks=1 00:29:48.092 --rc geninfo_unexecuted_blocks=1 00:29:48.092 00:29:48.092 ' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:48.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.092 --rc genhtml_branch_coverage=1 00:29:48.092 --rc genhtml_function_coverage=1 00:29:48.092 --rc genhtml_legend=1 00:29:48.092 --rc geninfo_all_blocks=1 00:29:48.092 --rc geninfo_unexecuted_blocks=1 00:29:48.092 00:29:48.092 ' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:48.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.092 --rc genhtml_branch_coverage=1 00:29:48.092 --rc genhtml_function_coverage=1 00:29:48.092 --rc genhtml_legend=1 00:29:48.092 --rc geninfo_all_blocks=1 00:29:48.092 --rc geninfo_unexecuted_blocks=1 00:29:48.092 00:29:48.092 ' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:48.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.092 --rc genhtml_branch_coverage=1 00:29:48.092 --rc genhtml_function_coverage=1 00:29:48.092 --rc genhtml_legend=1 00:29:48.092 --rc geninfo_all_blocks=1 00:29:48.092 --rc geninfo_unexecuted_blocks=1 00:29:48.092 00:29:48.092 ' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.092 14:27:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.092 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:48.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:48.093 
14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.093 14:27:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.239 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.239 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.239 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.239 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:56.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:56.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:56.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:56.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.240 14:27:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:56.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:29:56.240 00:29:56.240 --- 10.0.0.2 ping statistics --- 00:29:56.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.240 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
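# The nvmf_tcp_init trace above pins one e810 port inside a private network namespace
# as the target side (10.0.0.2) and leaves its peer port in the root namespace as the
# initiator (10.0.0.1), then opens the firewall for port 4420 and ping-checks both
# directions. A condensed sketch of that setup, assuming the cvl_0_0/cvl_0_1 interface
# names from this run:
#
#   ip netns add cvl_0_0_ns_spdk
#   ip link set cvl_0_0 netns cvl_0_0_ns_spdk
#   ip addr add 10.0.0.1/24 dev cvl_0_1
#   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
#   ip link set cvl_0_1 up
#   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
#   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
#   ping -c 1 10.0.0.2   # initiator -> target; the reverse ping runs inside the netns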
00:29:56.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:29:56.240 00:29:56.240 --- 10.0.0.1 ping statistics --- 00:29:56.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.240 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:56.240 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3541199 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3541199 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3541199 ']' 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.241 14:28:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.241 [2024-11-25 14:28:00.411222] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
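# fio.sh then launches the target inside that namespace and blocks on waitforlisten
# until the RPC socket answers before issuing any configuration. A minimal sketch of
# the launch/wait pattern, assuming the default /var/tmp/spdk.sock RPC endpoint (the
# polling loop here is an illustrative stand-in for waitforlisten):
#
#   ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
#   nvmfpid=$!
#   until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
#       sleep 0.5   # retry until the app is up and serving RPCs
#   done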
00:29:56.241 [2024-11-25 14:28:00.411314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.241 [2024-11-25 14:28:00.512332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.241 [2024-11-25 14:28:00.564764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.241 [2024-11-25 14:28:00.564816] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.241 [2024-11-25 14:28:00.564825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.241 [2024-11-25 14:28:00.564832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.241 [2024-11-25 14:28:00.564839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:56.241 [2024-11-25 14:28:00.566914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.241 [2024-11-25 14:28:00.567074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:56.241 [2024-11-25 14:28:00.567236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:56.241 [2024-11-25 14:28:00.567238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.241 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.241 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:29:56.241 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:56.502 [2024-11-25 14:28:01.403816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.502 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:56.502 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.502 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:56.502 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:56.763 Malloc1 00:29:56.764 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.024 14:28:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:57.285 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.285 [2024-11-25 14:28:02.266964] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.285 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:57.546 14:28:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:57.807 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:57.807 fio-3.35 00:29:57.807 Starting 1 thread 00:30:00.352 00:30:00.352 test: (groupid=0, jobs=1): 
err= 0: pid=3541895: Mon Nov 25 14:28:05 2024
00:30:00.352 read: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(108MiB/2004msec)
00:30:00.352 slat (usec): min=2, max=213, avg= 2.13, stdev= 1.78
00:30:00.352 clat (usec): min=2840, max=8769, avg=5108.99, stdev=365.79
00:30:00.352 lat (usec): min=2874, max=8775, avg=5111.12, stdev=365.78
00:30:00.352 clat percentiles (usec):
00:30:00.352 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817],
00:30:00.352 | 30.00th=[ 4948], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5211],
00:30:00.352 | 70.00th=[ 5276], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669],
00:30:00.352 | 99.00th=[ 5932], 99.50th=[ 6128], 99.90th=[ 7177], 99.95th=[ 8160],
00:30:00.352 | 99.99th=[ 8717]
00:30:00.352 bw ( KiB/s): min=53496, max=55648, per=99.97%, avg=54990.00, stdev=1007.18, samples=4
00:30:00.352 iops : min=13374, max=13912, avg=13747.50, stdev=251.80, samples=4
00:30:00.352 write: IOPS=13.7k, BW=53.6MiB/s (56.2MB/s)(108MiB/2004msec); 0 zone resets
00:30:00.352 slat (usec): min=2, max=194, avg= 2.20, stdev= 1.32
00:30:00.352 clat (usec): min=2270, max=8174, avg=4140.82, stdev=304.75
00:30:00.352 lat (usec): min=2288, max=8176, avg=4143.02, stdev=304.79
00:30:00.352 clat percentiles (usec):
00:30:00.352 | 1.00th=[ 3425], 5.00th=[ 3687], 10.00th=[ 3785], 20.00th=[ 3916],
00:30:00.352 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228],
00:30:00.352 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4555],
00:30:00.352 | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 6128], 99.95th=[ 7046],
00:30:00.352 | 99.99th=[ 7701]
00:30:00.352 bw ( KiB/s): min=54024, max=55384, per=99.97%, avg=54914.00, stdev=606.08, samples=4
00:30:00.352 iops : min=13506, max=13846, avg=13728.50, stdev=151.52, samples=4
00:30:00.352 lat (msec) : 4=15.30%, 10=84.70%
00:30:00.352 cpu : usr=75.64%, sys=23.07%, ctx=26, majf=0, minf=16
00:30:00.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:30:00.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:00.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:30:00.352 issued rwts: total=27558,27521,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:00.352 latency : target=0, window=0, percentile=100.00%, depth=128
00:30:00.352
00:30:00.352 Run status group 0 (all jobs):
00:30:00.352 READ: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2004-2004msec
00:30:00.352 WRITE: bw=53.6MiB/s (56.2MB/s), 53.6MiB/s-53.6MiB/s (56.2MB/s-56.2MB/s), io=108MiB (113MB), run=2004-2004msec
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:30:00.352 14:28:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:30:00.920 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:30:00.920 fio-3.35
00:30:00.920 Starting 1 thread
00:30:02.305 [2024-11-25 14:28:07.164670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d975d0 is same with the state(6) to be set
00:30:02.305 [2024-11-25 14:28:07.164720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d975d0 is same with the state(6) to be set
00:30:03.245
00:30:03.245 test: (groupid=0, jobs=1): err= 0: pid=3542561: Mon Nov 25 14:28:08 2024
00:30:03.245 read: IOPS=9394, BW=147MiB/s (154MB/s)(295MiB/2008msec)
00:30:03.245 slat (usec): min=3, max=139, avg= 3.61, stdev= 1.70
00:30:03.245 clat (usec): min=2083, max=51692, avg=8417.47, stdev=3782.15
00:30:03.245 lat (usec): min=2086, max=51696, avg=8421.08, stdev=3782.20
00:30:03.245 clat percentiles (usec):
00:30:03.245 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6325],
00:30:03.245 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 8160], 60.00th=[ 8717],
00:30:03.245 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11600],
00:30:03.245 | 99.00th=[13304], 99.50th=[46400], 99.90th=[50594], 99.95th=[51119],
00:30:03.245 | 99.99th=[51643]
00:30:03.245 bw ( KiB/s): min=69376, max=80224, per=49.41%, avg=74272.00, stdev=4796.23, samples=4
00:30:03.245 iops : min= 4336, max= 5014, avg=4642.00, stdev=299.76, samples=4
00:30:03.245 write: IOPS=5470, BW=85.5MiB/s (89.6MB/s)(151MiB/1769msec); 0 zone resets
00:30:03.245 slat (usec): min=39, max=331, avg=40.86, stdev= 7.05
00:30:03.245 clat (usec): min=2243, max=15851, avg=9110.01, stdev=1387.81
00:30:03.245 lat (usec): min=2283, max=15891, avg=9150.87, stdev=1389.24
00:30:03.245 clat percentiles (usec):
00:30:03.245 | 1.00th=[ 6456], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7898],
00:30:03.245 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372],
00:30:03.245 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11600],
00:30:03.245 | 99.00th=[12780], 99.50th=[13435], 99.90th=[14091], 99.95th=[14484],
00:30:03.245 | 99.99th=[15795]
00:30:03.245 bw ( KiB/s): min=71968, max=83104, per=88.24%, avg=77240.00, stdev=5054.20, samples=4
00:30:03.245 iops : min= 4498, max= 5194, avg=4827.50, stdev=315.89, samples=4
00:30:03.245 lat (msec) : 4=0.53%, 10=79.27%, 20=19.76%, 50=0.33%, 100=0.11%
00:30:03.245 cpu : usr=84.65%, sys=14.05%, ctx=17, majf=0, minf=24
00:30:03.245 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5%
00:30:03.245 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:30:03.245 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:30:03.245 issued rwts: total=18865,9678,0,0 short=0,0,0,0 dropped=0,0,0,0
00:30:03.245 latency : target=0, window=0, percentile=100.00%, depth=128
00:30:03.245
00:30:03.245 Run status group 0 (all jobs):
00:30:03.245 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2008-2008msec
00:30:03.245 WRITE: bw=85.5MiB/s (89.6MB/s), 85.5MiB/s-85.5MiB/s (89.6MB/s-89.6MB/s), io=151MiB (159MB), run=1769-1769msec
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:03.245 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- #
'[' -n 3541199 ']' 00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3541199 00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3541199 ']' 00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3541199 00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.245 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3541199 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3541199' 00:30:03.507 killing process with pid 3541199 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3541199 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3541199 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:03.507 14:28:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.056 00:30:06.056 real 0m17.894s 00:30:06.056 user 1m10.061s 00:30:06.056 sys 0m7.660s 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:06.056 ************************************ 00:30:06.056 END TEST nvmf_fio_host 00:30:06.056 ************************************ 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:06.056 ************************************ 00:30:06.056 START TEST nvmf_failover 00:30:06.056 ************************************ 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:06.056 * Looking for test storage... 00:30:06.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.056 --rc genhtml_branch_coverage=1 00:30:06.056 --rc genhtml_function_coverage=1 00:30:06.056 --rc genhtml_legend=1 00:30:06.056 --rc geninfo_all_blocks=1 00:30:06.056 --rc geninfo_unexecuted_blocks=1 00:30:06.056 00:30:06.056 ' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.056 --rc genhtml_branch_coverage=1 00:30:06.056 --rc genhtml_function_coverage=1 00:30:06.056 --rc genhtml_legend=1 00:30:06.056 --rc geninfo_all_blocks=1 00:30:06.056 --rc geninfo_unexecuted_blocks=1 00:30:06.056 00:30:06.056 ' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.056 --rc genhtml_branch_coverage=1 00:30:06.056 --rc genhtml_function_coverage=1 00:30:06.056 --rc genhtml_legend=1 00:30:06.056 --rc geninfo_all_blocks=1 00:30:06.056 --rc geninfo_unexecuted_blocks=1 00:30:06.056 00:30:06.056 ' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.056 --rc genhtml_branch_coverage=1 00:30:06.056 --rc genhtml_function_coverage=1 00:30:06.056 --rc genhtml_legend=1 00:30:06.056 --rc geninfo_all_blocks=1 00:30:06.056 --rc geninfo_unexecuted_blocks=1 00:30:06.056 00:30:06.056 ' 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:06.056 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:06.057 [paths/export.sh@2, @3 and @4 each re-prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to an already toolchain-prefixed PATH, and the @6 echo re-prints the result; the four near-identical multi-line PATH values are elided]
00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:06.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
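The "[: : integer expression expected" line above is a genuine shell error, not test output: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) requires both operands of -eq to be integers, which an unset flag expanded to the empty string is not. A minimal sketch of the failure mode and the usual guard follows; SOME_TEST_FLAG is a hypothetical stand-in, not the variable nvmf/common.sh actually checks.

    #!/usr/bin/env bash
    # Reproduce the error logged above: an empty left operand to -eq.
    SOME_TEST_FLAG=""                          # hypothetical unset/empty flag
    [ "$SOME_TEST_FLAG" -eq 1 ] && echo on     # -> [: : integer expression expected
    # Guarded form: default the expansion to 0 so -eq always sees an integer.
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ] || echo "flag off, no error"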
00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:30:06.057 14:28:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.299 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.300 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:14.301 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:14.301 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.301 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:14.302 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:14.302 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.302 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
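The discovery pass above binds each E810 function (vendor 0x8086, device 0x159b) to its kernel net device by globbing the device's net/ directory in sysfs, which is how cvl_0_0 and cvl_0_1 were found. A standalone sketch of that step, using the two PCI addresses from this run:

    #!/usr/bin/env bash
    # For each candidate NIC, list the net devices the kernel exposes under
    # its PCI node -- the same glob nvmf/common.sh@411 expands above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue       # NIC bound to a non-net driver
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done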
00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:30:14.303 00:30:14.303 --- 10.0.0.2 ping statistics --- 00:30:14.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.303 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:30:14.303 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:14.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:30:14.304 00:30:14.304 --- 10.0.0.1 ping statistics --- 00:30:14.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.304 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3547223 00:30:14.304 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3547223 00:30:14.305 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:14.305 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3547223 ']' 00:30:14.305 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.305 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.305 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.305 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.305 14:28:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:14.305 [2024-11-25 14:28:18.482997] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:30:14.305 [2024-11-25 14:28:18.483066] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.305 [2024-11-25 14:28:18.583246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:14.305 [2024-11-25 14:28:18.634434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
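Everything from the netns plumbing above converges here: the target-side port lives in namespace cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator keeps 10.0.0.1/24 in the root namespace, and nvmf_tgt is started through 'ip netns exec'. A condensed sketch of that launch, with the binary path and arguments taken from this log; the readiness loop is only a stand-in for the harness's waitforlisten helper:

    #!/usr/bin/env bash
    # Start the SPDK NVMf target inside the test namespace and wait for
    # its default UNIX-domain RPC socket to appear.
    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                                 # pid of the netns-exec wrapper
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
    echo "nvmf_tgt up, pid $nvmfpid"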
00:30:14.306 [2024-11-25 14:28:18.634487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.306 [2024-11-25 14:28:18.634495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.306 [2024-11-25 14:28:18.634502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.306 [2024-11-25 14:28:18.634509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:14.306 [2024-11-25 14:28:18.636310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.306 [2024-11-25 14:28:18.636541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.306 [2024-11-25 14:28:18.636543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.306 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.306 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:14.306 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:14.306 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.306 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:14.306 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:14.306 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:14.570 [2024-11-25 14:28:19.502421] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.570 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:14.832 Malloc0 00:30:14.832 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.093 14:28:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.093 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.354 [2024-11-25 14:28:20.319709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.354 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:15.615 [2024-11-25 14:28:20.516295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:15.615 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:15.615 [2024-11-25 14:28:20.696821] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3547605 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3547605 /var/tmp/bdevperf.sock 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3547605 ']' 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:15.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.876 14:28:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:16.818 14:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.819 14:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:16.819 14:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:16.819 NVMe0n1 00:30:16.819 14:28:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:17.388 00:30:17.388 14:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:17.388 14:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3547930 00:30:17.388 14:28:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:18.330 14:28:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.330 [2024-11-25 14:28:23.393714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210d4f0 is same with the state(6) to be set 00:30:18.330 [2024-11-25 14:28:23.393753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210d4f0 is same with the state(6) to be set 00:30:18.330 [2024-11-25 14:28:23.393758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210d4f0 is same with the state(6) to be set 00:30:18.330 
[duplicate tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x210d4f0, timestamps 2024-11-25 14:28:23.393763 through 14:28:23.394034, elided; every record repeats the message above verbatim]
00:30:18.591 14:28:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:21.896 14:28:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:21.896 00:30:21.896 14:28:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:21.896 [2024-11-25 14:28:26.847618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896
[duplicate tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* entries for tqpair=0x210dfa0, timestamps 2024-11-25 14:28:26.847651 through 14:28:26.848232, elided; every record repeats the message above verbatim]
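The bursts of recv-state errors collapsed above are the target-side symptom of the failover being exercised: bdevperf holds paths to 10.0.0.2:4420 and :4421 (both attached with -x failover), the 4420 listener is removed mid-I/O, a third path on 4422 is attached, and then 4421 is removed in turn. Reduced to its RPCs, the path juggling looks like the sketch below; it assumes a bdevperf instance already running with '-z -r /var/tmp/bdevperf.sock' exactly as in this log.

    #!/usr/bin/env bash
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    # Register an extra path on 4422 while 4420 is already gone...
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
         -s 4422 -f ipv4 -n $nqn -x failover
    # ...then drop the 4421 listener so I/O must fail over to 4422.
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421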
[2024-11-25 14:28:26.847656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847855] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the state(6) to be set 00:30:21.896 [2024-11-25 14:28:26.847954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210dfa0 is same with the 
state(6) to be set
00:30:21.897 14:28:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:30:25.197 14:28:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:25.197 [2024-11-25 14:28:30.036044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:25.197 14:28:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:26.138 14:28:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:26.398 [2024-11-25 14:28:31.234358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd34c0 is same with the state(6) to be set
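
For reference, the host/failover.sh@53 and @57 steps above drive the target through SPDK's JSON-RPC interface: rpc.py turns each subcommand into a JSON-RPC 2.0 request on the application's Unix socket. A minimal sketch of the equivalent add_listener call, assuming the default /var/tmp/spdk.sock socket path; the fixed request id and the one-shot recv() are simplifications, not how the real client is written:

import json
import socket

def nvmf_subsystem_add_listener(nqn, traddr, trsvcid, sock_path="/var/tmp/spdk.sock"):
    # Build a JSON-RPC 2.0 request equivalent to:
    #   rpc.py nvmf_subsystem_add_listener <nqn> -t tcp -a <traddr> -s <trsvcid>
    request = {
        "jsonrpc": "2.0",
        "id": 1,  # assumed fixed id; rpc.py assigns its own
        "method": "nvmf_subsystem_add_listener",
        "params": {
            "nqn": nqn,
            "listen_address": {"trtype": "tcp", "traddr": traddr, "trsvcid": trsvcid},
        },
    }
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        return json.loads(sock.recv(65536))  # single read; a real client loops until the reply is complete

print(nvmf_subsystem_add_listener("nqn.2016-06.io.spdk:cnode1", "10.0.0.2", "4420"))
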
00:30:26.399 14:28:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3547930
00:30:32.987 {
00:30:32.987 "results": [
00:30:32.988 {
00:30:32.988 "job": "NVMe0n1",
00:30:32.988 "core_mask": "0x1",
00:30:32.988 "workload": "verify",
00:30:32.988 "status": "finished",
00:30:32.988 "verify_range": {
00:30:32.988 "start": 0,
00:30:32.988 "length": 16384
00:30:32.988 },
00:30:32.988 "queue_depth": 128,
00:30:32.988 "io_size": 4096,
00:30:32.988 "runtime": 15.005519,
00:30:32.988 "iops": 12463.214367993536,
00:30:32.988 "mibps": 48.68443112497475,
00:30:32.988 "io_failed": 8125,
00:30:32.988 "io_timeout": 0,
00:30:32.988 "avg_latency_us": 9821.593580059649,
00:30:32.988 "min_latency_us": 539.3066666666666,
00:30:32.988 "max_latency_us": 26323.626666666667
00:30:32.988 }
00:30:32.988 ],
00:30:32.988 "core_count": 1
00:30:32.988 }
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3547605
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3547605 ']'
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3547605
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3547605
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3547605'
00:30:32.988 killing process with pid 3547605
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3547605
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3547605
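
As a quick cross-check of the bdevperf summary above: the "mibps" field is just "iops" scaled by the 4 KiB "io_size", and the reported numbers satisfy that exactly:

# Sanity-check the bdevperf summary: MiB/s = IOPS * io_size / 2^20
iops = 12463.214367993536
io_size = 4096                      # bytes, from the "io_size" field
mibps = iops * io_size / (1 << 20)
print(mibps)                        # 48.68443112497475, the reported "mibps"
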
00:30:32.988 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:32.988 [2024-11-25 14:28:20.766286] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:30:32.988 [2024-11-25 14:28:20.766344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547605 ]
00:30:32.988 [2024-11-25 14:28:20.857162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:32.988 [2024-11-25 14:28:20.892908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:32.988 Running I/O for 15 seconds...
00:30:32.988 11075.00 IOPS, 43.26 MiB/s [2024-11-25T13:28:38.078Z] [2024-11-25 14:28:23.394344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:32.988 [2024-11-25 14:28:23.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:32.991 [2024-11-25 14:28:23.396535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:32.991 [2024-11-25 14:28:23.396542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:32.991 [2024-11-25 14:28:23.396548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96144 len:8 PRP1 0x0 PRP2 0x0
00:30:32.991 [2024-11-25 14:28:23.396558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:32.991 [2024-11-25 14:28:23.396597] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:32.991 [2024-11-25 14:28:23.396619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:32.991 [2024-11-25 14:28:23.396627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:32.991 [2024-11-25 14:28:23.396636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:32.991 [2024-11-25 14:28:23.396644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:32.991 [2024-11-25 14:28:23.396652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:32.991 [2024-11-25 14:28:23.396659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:32.991 [2024-11-25 14:28:23.396667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:32.991 [2024-11-25 14:28:23.396674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:32.991 [2024-11-25 14:28:23.396682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:32.991 [2024-11-25 14:28:23.400236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:32.991 [2024-11-25 14:28:23.400258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e6df0 (9): Bad file descriptor
00:30:32.991 [2024-11-25 14:28:23.470246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
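
Each READ or WRITE aborted by the SQ deletion shows up in try.txt as a print_command/print_completion pair, so the volume of these records scales with the queue depth and in-flight I/O at the moment of failover. A throwaway sketch (the log path is an assumption) to tally them instead of reading them by hand:

import sys

aborts = 0
failovers = 0
# Count abort completions and failover events in a try.txt-style dump.
with open(sys.argv[1] if len(sys.argv) > 1 else "try.txt") as f:
    for line in f:
        aborts += line.count("ABORTED - SQ DELETION")
        failovers += line.count("bdev_nvme_failover_trid")
print(f"{aborts} aborted completions across {failovers} failover event(s)")
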
00:30:32.991 10800.50 IOPS, 42.19 MiB/s [2024-11-25T13:28:38.081Z] 11075.00 IOPS, 43.26 MiB/s [2024-11-25T13:28:38.081Z] 11424.25 IOPS, 44.63 MiB/s [2024-11-25T13:28:38.081Z]
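The throughput samples above are consistent with 4 KiB I/O: every command in this log carries len:8, i.e. eight 512-byte blocks, and multiplying IOPS by 4096 bytes reproduces the reported MiB/s. A quick sanity check (plain awk, no SPDK involved):

  # 10800.50 IOPS x (8 x 512 B) / 2^20 = 42.19 MiB/s, matching the first sample
  awk 'BEGIN { printf "%.2f MiB/s\n", 10800.50 * 8 * 512 / (1024 * 1024) }'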
00:30:32.991 [2024-11-25 14:28:26.850531] [... repeated nvme_qpair.c NOTICE pairs omitted: READ commands (sqid:1, lba 47320-47392) and WRITE commands (sqid:1, lba 47400-48072) each printed and completed with ABORTED - SQ DELETION (00/08) ...]
00:30:32.994 [... repeated abort sequences omitted: nvme_qpair_abort_queued_reqs "aborting queued i/o" followed by nvme_qpair_manual_complete_request for queued WRITE commands (lba 48080-48336), each completed with ABORTED - SQ DELETION (00/08) ...]
00:30:32.995 [2024-11-25 14:28:26.869368] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:32.995 [... repeated admin-queue abort notices omitted: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each completed with ABORTED - SQ DELETION (00/08) ...]
00:30:32.995 [2024-11-25 14:28:26.869462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:30:32.995 [2024-11-25 14:28:26.869503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e6df0 (9): Bad file descriptor
00:30:32.995 [2024-11-25 14:28:26.872743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:30:32.995 [2024-11-25 14:28:26.938578] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:30:32.995 11464.40 IOPS, 44.78 MiB/s [2024-11-25T13:28:38.085Z] 11723.50 IOPS, 45.79 MiB/s [2024-11-25T13:28:38.085Z] 11898.71 IOPS, 46.48 MiB/s [2024-11-25T13:28:38.085Z] 12021.62 IOPS, 46.96 MiB/s [2024-11-25T13:28:38.085Z]
00:30:32.995 [2024-11-25 14:28:31.234950] [... repeated nvme_qpair.c NOTICE pairs omitted: READ commands (sqid:1, lba 125432 onward) with an interleaved WRITE (lba 126256), each printed and completed with ABORTED - SQ DELETION (00/08); excerpt ends mid-batch ...]
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:125712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:125720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:125728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:125736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:125744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:125752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:125768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:125776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:125784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:32.996 [2024-11-25 14:28:31.235525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:125792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:125800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.996 [2024-11-25 14:28:31.235549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.996 [2024-11-25 14:28:31.235555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:125808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:125816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:125824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:125832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:125840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:125848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 
14:28:31.235640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:125872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:125880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:125888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:125896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:125904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:125912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:125920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:125928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:125936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:125944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:125952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:125960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:125976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:125984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.997 [2024-11-25 14:28:31.235838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:126272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.997 [2024-11-25 14:28:31.235850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:126280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.997 [2024-11-25 14:28:31.235862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:126288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.997 [2024-11-25 14:28:31.235873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:126296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.997 [2024-11-25 14:28:31.235884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:126304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.997 [2024-11-25 14:28:31.235896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:126312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.997 [2024-11-25 14:28:31.235907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:126032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.235994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:126056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.235999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.997 [2024-11-25 14:28:31.236005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.997 [2024-11-25 14:28:31.236010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 
[2024-11-25 14:28:31.236228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.998 [2024-11-25 14:28:31.236279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:126336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:126344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:126352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236343] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:126368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:126376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:126384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:126408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:126416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:126432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.998 [2024-11-25 14:28:31.236458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:126440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.998 [2024-11-25 14:28:31.236462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.999 [2024-11-25 14:28:31.236479] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:32.999 [2024-11-25 14:28:31.236484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:32.999 [2024-11-25 14:28:31.236489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:126448 len:8 PRP1 0x0 PRP2 0x0 00:30:32.999 [2024-11-25 14:28:31.236495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.999 [2024-11-25 14:28:31.236530] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:32.999 [2024-11-25 14:28:31.236546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.999 [2024-11-25 14:28:31.236552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.999 [2024-11-25 14:28:31.236558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.999 [2024-11-25 14:28:31.236563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.999 [2024-11-25 14:28:31.236569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.999 [2024-11-25 14:28:31.236574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.999 [2024-11-25 14:28:31.236580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:32.999 [2024-11-25 14:28:31.236585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.999 [2024-11-25 14:28:31.236590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:30:32.999 [2024-11-25 14:28:31.239026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:32.999 [2024-11-25 14:28:31.239047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5e6df0 (9): Bad file descriptor 00:30:32.999 [2024-11-25 14:28:31.267538] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
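The wall of ABORTED - SQ DELETION completions above is the expected signature of a TCP failover: when the host drops the connection to 10.0.0.2:4422, nvme_qpair_abort_queued_reqs() manually completes every command still queued on the I/O submission queue with NVMe status 00/08 (generic command status, Command Aborted due to SQ Deletion) before bdev_nvme reconnects on the next configured path. The harness verifies below that exactly three such failovers completed; a minimal stand-alone check along the same lines, assuming the bdevperf output was captured to the test's try.txt as above:

  # hedged sketch: one 'Resetting controller successful' notice is logged per completed failover
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count == 3 )) || echo "expected 3 successful resets, got $count"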
00:30:32.999 12065.67 IOPS, 47.13 MiB/s [2024-11-25T13:28:38.089Z] 12154.00 IOPS, 47.48 MiB/s [2024-11-25T13:28:38.089Z] 12245.91 IOPS, 47.84 MiB/s [2024-11-25T13:28:38.089Z] 12298.92 IOPS, 48.04 MiB/s [2024-11-25T13:28:38.089Z] 12369.23 IOPS, 48.32 MiB/s [2024-11-25T13:28:38.089Z] 12439.29 IOPS, 48.59 MiB/s 00:30:32.999 Latency(us) 00:30:32.999 [2024-11-25T13:28:38.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.999 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:32.999 Verification LBA range: start 0x0 length 0x4000 00:30:32.999 NVMe0n1 : 15.01 12463.21 48.68 541.47 0.00 9821.59 539.31 26323.63 00:30:32.999 [2024-11-25T13:28:38.089Z] =================================================================================================================== 00:30:32.999 [2024-11-25T13:28:38.089Z] Total : 12463.21 48.68 541.47 0.00 9821.59 539.31 26323.63 00:30:32.999 Received shutdown signal, test time was about 15.000000 seconds 00:30:32.999 00:30:32.999 Latency(us) 00:30:32.999 [2024-11-25T13:28:38.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.999 [2024-11-25T13:28:38.089Z] =================================================================================================================== 00:30:32.999 [2024-11-25T13:28:38.089Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3550942 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3550942 /var/tmp/bdevperf.sock 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3550942 ']' 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:32.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
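From here the harness drives bdevperf over its RPC socket rather than from a static config: -z starts the application idle, -r names the UNIX socket, and the verify workload (-q 128 -o 4096 -w verify -t 1) only starts once perform_tests is issued. A condensed sketch of that flow, assuming the commands are run from the SPDK checkout used above:

  # start bdevperf in wait-for-RPC mode on its own socket
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # attach the first path of the multipath controller (the 4421 and 4422 paths follow the same form)
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # kick off the configured workload
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests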
00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.999 14:28:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:33.570 14:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.570 14:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:33.570 14:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:33.570 [2024-11-25 14:28:38.564645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:33.570 14:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:33.831 [2024-11-25 14:28:38.745097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:33.831 14:28:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:34.092 NVMe0n1 00:30:34.092 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:34.352 00:30:34.352 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:34.613 00:30:34.873 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:34.873 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:34.873 14:28:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:35.133 14:28:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:38.436 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:38.436 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:38.436 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3551956 00:30:38.436 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:38.436 14:28:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3551956 00:30:39.379 { 00:30:39.379 "results": [ 00:30:39.379 { 00:30:39.379 "job": "NVMe0n1", 00:30:39.379 "core_mask": "0x1", 
00:30:39.379 "workload": "verify", 00:30:39.379 "status": "finished", 00:30:39.379 "verify_range": { 00:30:39.379 "start": 0, 00:30:39.379 "length": 16384 00:30:39.379 }, 00:30:39.379 "queue_depth": 128, 00:30:39.379 "io_size": 4096, 00:30:39.379 "runtime": 1.006767, 00:30:39.379 "iops": 12740.783120622746, 00:30:39.379 "mibps": 49.7686840649326, 00:30:39.379 "io_failed": 0, 00:30:39.379 "io_timeout": 0, 00:30:39.379 "avg_latency_us": 10012.24202697435, 00:30:39.379 "min_latency_us": 2157.2266666666665, 00:30:39.379 "max_latency_us": 8465.066666666668 00:30:39.379 } 00:30:39.379 ], 00:30:39.379 "core_count": 1 00:30:39.379 } 00:30:39.379 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:39.379 [2024-11-25 14:28:37.609346] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:30:39.379 [2024-11-25 14:28:37.609404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550942 ] 00:30:39.379 [2024-11-25 14:28:37.693621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.379 [2024-11-25 14:28:37.723144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.380 [2024-11-25 14:28:40.054517] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:39.380 [2024-11-25 14:28:40.054564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.380 [2024-11-25 14:28:40.054574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.380 [2024-11-25 14:28:40.054581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.380 [2024-11-25 14:28:40.054586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.380 [2024-11-25 14:28:40.054592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.380 [2024-11-25 14:28:40.054597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.380 [2024-11-25 14:28:40.054602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:39.380 [2024-11-25 14:28:40.054607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.380 [2024-11-25 14:28:40.054613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:30:39.380 [2024-11-25 14:28:40.054636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:30:39.380 [2024-11-25 14:28:40.054647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9df0 (9): Bad file descriptor 00:30:39.380 [2024-11-25 14:28:40.105234] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:30:39.380 Running I/O for 1 seconds... 00:30:39.380 12699.00 IOPS, 49.61 MiB/s 00:30:39.380 Latency(us) 00:30:39.380 [2024-11-25T13:28:44.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:39.380 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:39.380 Verification LBA range: start 0x0 length 0x4000 00:30:39.380 NVMe0n1 : 1.01 12740.78 49.77 0.00 0.00 10012.24 2157.23 8465.07 00:30:39.380 [2024-11-25T13:28:44.470Z] =================================================================================================================== 00:30:39.380 [2024-11-25T13:28:44.470Z] Total : 12740.78 49.77 0.00 0.00 10012.24 2157.23 8465.07 00:30:39.380 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:39.380 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:39.641 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.901 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:39.901 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:39.901 14:28:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:40.163 14:28:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3550942 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3550942 ']' 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3550942 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3550942 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
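Steps 95 through 100 above, and the final check at step 103 below, tear the multipath controller down one path at a time: after each bdev_nvme_detach_controller the harness re-runs bdev_nvme_get_controllers and greps for NVMe0 to confirm the controller is still reachable through the remaining paths. A hedged sketch of one such round trip (the detach/grep pair mirrors failover.sh steps 98 and 99; the trailing echo is illustrative only):

  # drop the 10.0.0.2:4422 path, then confirm NVMe0 survives on another path
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0 \
      && echo 'NVMe0 still attached'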
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3550942' 00:30:43.469 killing process with pid 3550942 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3550942 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3550942 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:43.469 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:43.730 rmmod nvme_tcp 00:30:43.730 rmmod nvme_fabrics 00:30:43.730 rmmod nvme_keyring 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3547223 ']' 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3547223 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3547223 ']' 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3547223 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3547223 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3547223' 00:30:43.730 killing process with pid 3547223 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3547223 00:30:43.730 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3547223 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.993 14:28:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.542 14:28:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:46.542 00:30:46.542 real 0m40.355s 00:30:46.542 user 2m3.943s 00:30:46.542 sys 0m8.729s 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:46.542 ************************************ 00:30:46.542 END TEST nvmf_failover 00:30:46.542 ************************************ 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:46.542 ************************************ 00:30:46.542 START TEST nvmf_host_discovery 00:30:46.542 ************************************ 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:46.542 * Looking for test storage... 
00:30:46.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.542 --rc genhtml_branch_coverage=1 00:30:46.542 --rc genhtml_function_coverage=1 00:30:46.542 --rc genhtml_legend=1 00:30:46.542 --rc geninfo_all_blocks=1 00:30:46.542 --rc geninfo_unexecuted_blocks=1 00:30:46.542 00:30:46.542 ' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.542 --rc genhtml_branch_coverage=1 00:30:46.542 --rc genhtml_function_coverage=1 00:30:46.542 --rc genhtml_legend=1 00:30:46.542 --rc geninfo_all_blocks=1 00:30:46.542 --rc geninfo_unexecuted_blocks=1 00:30:46.542 00:30:46.542 ' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.542 --rc genhtml_branch_coverage=1 00:30:46.542 --rc genhtml_function_coverage=1 00:30:46.542 --rc genhtml_legend=1 00:30:46.542 --rc geninfo_all_blocks=1 00:30:46.542 --rc geninfo_unexecuted_blocks=1 00:30:46.542 00:30:46.542 ' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:46.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:46.542 --rc genhtml_branch_coverage=1 00:30:46.542 --rc genhtml_function_coverage=1 00:30:46.542 --rc genhtml_legend=1 00:30:46.542 --rc geninfo_all_blocks=1 00:30:46.542 --rc geninfo_unexecuted_blocks=1 00:30:46.542 00:30:46.542 ' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:46.542 14:28:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:46.542 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:46.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:30:46.543 14:28:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:54.889 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.889 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:54.890 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.890 14:28:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:54.890 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:54.890 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:54.890 
14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:54.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:30:54.890 00:30:54.890 --- 10.0.0.2 ping statistics --- 00:30:54.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.890 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:54.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:30:54.890 00:30:54.890 --- 10.0.0.1 ping statistics --- 00:30:54.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.890 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3557296 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3557296 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3557296 ']' 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.890 14:28:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.890 [2024-11-25 14:28:58.850255] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:30:54.890 [2024-11-25 14:28:58.850323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.890 [2024-11-25 14:28:58.951126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.890 [2024-11-25 14:28:59.002369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.890 [2024-11-25 14:28:59.002414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.890 [2024-11-25 14:28:59.002423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.890 [2024-11-25 14:28:59.002431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.890 [2024-11-25 14:28:59.002438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.890 [2024-11-25 14:28:59.003140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.890 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.890 [2024-11-25 14:28:59.717993] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.891 [2024-11-25 14:28:59.730253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.891 null0 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.891 null1 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3557425 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3557425 /tmp/host.sock 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3557425 ']' 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:54.891 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.891 14:28:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.891 [2024-11-25 14:28:59.826606] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:30:54.891 [2024-11-25 14:28:59.826672] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3557425 ] 00:30:54.891 [2024-11-25 14:28:59.919333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.891 [2024-11-25 14:28:59.973520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.833 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:55.834 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.095 [2024-11-25 14:29:00.985379] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:56.095 14:29:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:56.095 14:29:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.095 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:56.096 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.096 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:56.096 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.356 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:30:56.356 14:29:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:30:56.928 [2024-11-25 14:29:01.718163] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:56.928 [2024-11-25 14:29:01.718184] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:56.928 [2024-11-25 14:29:01.718198] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:56.928 
[2024-11-25 14:29:01.807479] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:56.928 [2024-11-25 14:29:01.988620] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:56.928 [2024-11-25 14:29:01.989679] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18f9840:1 started. 00:30:56.928 [2024-11-25 14:29:01.991295] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:56.928 [2024-11-25 14:29:01.991314] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:56.928 [2024-11-25 14:29:01.995515] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18f9840 was disconnected and freed. delete nvme_qpair. 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.189 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:57.190 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.190 14:29:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:57.190 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.190 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:57.452 [2024-11-25 14:29:02.432025] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x18f9c60:1 started. 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:57.452 [2024-11-25 14:29:02.436464] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x18f9c60 was disconnected and freed. delete nvme_qpair. 
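The harness primitives driving this trace are visible in the xtrace itself: waitforcondition polls an arbitrary bash condition once per second for up to ten tries (autotest_common.sh@918-924), and the get_* helpers flatten JSON-RPC output into a single comparable line. A minimal sketch reconstructed from the trace; the rpc_cmd/jq pipelines appear verbatim above, but the exact function bodies and waitforcondition's failure branch are assumptions:

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0    # autotest_common.sh@921-922
            sleep 1                     # autotest_common.sh@924
        done
        return 1                        # assumed: give up after 10 tries
    }

    get_subsystem_names() {
        # controller names on the host app's RPC socket, sorted, one line
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers |
            jq -r '.[].name' | sort | xargs
    }

    get_notification_count() {
        # count target notifications newer than the last consumed id and
        # advance notify_id (matches the 0 -> 1 -> 2 -> 4 progression here)
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }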
00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.452 [2024-11-25 14:29:02.525311] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:57.452 [2024-11-25 14:29:02.526145] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:57.452 [2024-11-25 14:29:02.526169] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:57.452 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:57.453 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.453 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:57.453 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.453 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:57.713 [2024-11-25 14:29:02.615439] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:57.713 14:29:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:30:57.974 [2024-11-25 14:29:02.924936] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:30:57.974 [2024-11-25 14:29:02.924976] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:57.974 [2024-11-25 14:29:02.924984] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:57.974 [2024-11-25 14:29:02.924990] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:58.918 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 [2024-11-25 14:29:03.797430] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:58.919 [2024-11-25 14:29:03.797453] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:58.919 [2024-11-25 14:29:03.803074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.919 [2024-11-25 14:29:03.803092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.919 [2024-11-25 14:29:03.803101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.919 [2024-11-25 14:29:03.803109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.919 [2024-11-25 14:29:03.803117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.919 [2024-11-25 14:29:03.803125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.919 [2024-11-25 14:29:03.803138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:58.919 [2024-11-25 14:29:03.803146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:58.919 [2024-11-25 14:29:03.803153] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:58.919 [2024-11-25 14:29:03.813087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.919 [2024-11-25 14:29:03.823121] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:58.919 [2024-11-25 14:29:03.823134] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:58.919 [2024-11-25 14:29:03.823139] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:58.919 [2024-11-25 14:29:03.823145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:58.919 [2024-11-25 14:29:03.823166] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:58.919 [2024-11-25 14:29:03.823420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.919 [2024-11-25 14:29:03.823435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9e90 with addr=10.0.0.2, port=4420 00:30:58.919 [2024-11-25 14:29:03.823443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.919 [2024-11-25 14:29:03.823455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.919 [2024-11-25 14:29:03.823467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.919 [2024-11-25 14:29:03.823474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.919 [2024-11-25 14:29:03.823482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.919 [2024-11-25 14:29:03.823489] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:58.919 [2024-11-25 14:29:03.823494] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.919 [2024-11-25 14:29:03.823499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
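This error burst is the expected fallout of the nvmf_subsystem_remove_listener call at host/discovery.sh@127 above: the target deletes the admin submission queue (the ABORTED - SQ DELETION completions), the host's old socket to port 4420 goes bad (EBADF), and bdev_nvme cycles through disconnect/reconnect with every attempt refused (connect() errno 111, ECONNREFUSED). The test itself just waits for the controller's path list to shrink to the surviving port. A sketch of that step, with get_subsystem_paths reconstructed from the jq filter in the trace (NVMF_SECOND_PORT is 4421 in this run):

    # remove the first listener, then wait until only 4421 remains
    # (host/discovery.sh@127 and @131 in the trace)
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'

    get_subsystem_paths() {
        # service ports of every path of one controller, numerically sorted
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }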
00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.919 [2024-11-25 14:29:03.833196] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:58.919 [2024-11-25 14:29:03.833207] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:58.919 [2024-11-25 14:29:03.833212] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:58.919 [2024-11-25 14:29:03.833220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:58.919 [2024-11-25 14:29:03.833235] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:58.919 [2024-11-25 14:29:03.833525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.919 [2024-11-25 14:29:03.833537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9e90 with addr=10.0.0.2, port=4420 00:30:58.919 [2024-11-25 14:29:03.833544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.919 [2024-11-25 14:29:03.833555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.919 [2024-11-25 14:29:03.833572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.919 [2024-11-25 14:29:03.833579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.919 [2024-11-25 14:29:03.833586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.919 [2024-11-25 14:29:03.833593] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:58.919 [2024-11-25 14:29:03.833597] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.919 [2024-11-25 14:29:03.833602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:58.919 [2024-11-25 14:29:03.843266] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:58.919 [2024-11-25 14:29:03.843280] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:58.919 [2024-11-25 14:29:03.843284] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:58.919 [2024-11-25 14:29:03.843289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:58.919 [2024-11-25 14:29:03.843304] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:58.919 [2024-11-25 14:29:03.843591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.919 [2024-11-25 14:29:03.843603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9e90 with addr=10.0.0.2, port=4420 00:30:58.919 [2024-11-25 14:29:03.843610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.919 [2024-11-25 14:29:03.843622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.919 [2024-11-25 14:29:03.843644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.919 [2024-11-25 14:29:03.843651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.919 [2024-11-25 14:29:03.843659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.919 [2024-11-25 14:29:03.843665] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:58.919 [2024-11-25 14:29:03.843670] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.919 [2024-11-25 14:29:03.843674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:58.919 [2024-11-25 14:29:03.853335] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:58.919 [2024-11-25 14:29:03.853352] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:58.919 [2024-11-25 14:29:03.853357] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:58.919 [2024-11-25 14:29:03.853361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:58.919 [2024-11-25 14:29:03.853376] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:58.919 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:58.919 [2024-11-25 14:29:03.853672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.919 [2024-11-25 14:29:03.853685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9e90 with addr=10.0.0.2, port=4420 00:30:58.919 [2024-11-25 14:29:03.853692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.919 [2024-11-25 14:29:03.853703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.920 [2024-11-25 14:29:03.853723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.920 [2024-11-25 14:29:03.853731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.920 [2024-11-25 14:29:03.853738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.920 [2024-11-25 14:29:03.853744] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:58.920 [2024-11-25 14:29:03.853749] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.920 [2024-11-25 14:29:03.853753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:58.920 [2024-11-25 14:29:03.863406] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:58.920 [2024-11-25 14:29:03.863418] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:58.920 [2024-11-25 14:29:03.863423] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:30:58.920 [2024-11-25 14:29:03.863427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:58.920 [2024-11-25 14:29:03.863441] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:58.920 [2024-11-25 14:29:03.863614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.920 [2024-11-25 14:29:03.863627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9e90 with addr=10.0.0.2, port=4420 00:30:58.920 [2024-11-25 14:29:03.863634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.920 [2024-11-25 14:29:03.863646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.920 [2024-11-25 14:29:03.863656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.920 [2024-11-25 14:29:03.863663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.920 [2024-11-25 14:29:03.863670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.920 [2024-11-25 14:29:03.863676] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:58.920 [2024-11-25 14:29:03.863681] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.920 [2024-11-25 14:29:03.863685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:58.920 [2024-11-25 14:29:03.873473] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:58.920 [2024-11-25 14:29:03.873486] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:58.920 [2024-11-25 14:29:03.873491] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:58.920 [2024-11-25 14:29:03.873495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:58.920 [2024-11-25 14:29:03.873510] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
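Note the cadence: the reconnect blocks above repeat nearly verbatim at roughly 10 ms intervals (timestamps .823, .833, .843, .853, .863, .873, .883), bdev_nvme retrying the dead 4420 path until the discovery poller processes the refreshed log page and drops it; see the ':4420 not found' record a little further down.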
00:30:58.920 [2024-11-25 14:29:03.873797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.920 [2024-11-25 14:29:03.873809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9e90 with addr=10.0.0.2, port=4420 00:30:58.920 [2024-11-25 14:29:03.873816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.920 [2024-11-25 14:29:03.873828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.920 [2024-11-25 14:29:03.873838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.920 [2024-11-25 14:29:03.873845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.920 [2024-11-25 14:29:03.873852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.920 [2024-11-25 14:29:03.873858] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:58.920 [2024-11-25 14:29:03.873863] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.920 [2024-11-25 14:29:03.873868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:58.920 [2024-11-25 14:29:03.883542] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:58.920 [2024-11-25 14:29:03.883553] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:58.920 [2024-11-25 14:29:03.883557] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:58.920 [2024-11-25 14:29:03.883562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:58.920 [2024-11-25 14:29:03.883579] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:58.920 [2024-11-25 14:29:03.883866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.920 [2024-11-25 14:29:03.883877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18c9e90 with addr=10.0.0.2, port=4420 00:30:58.920 [2024-11-25 14:29:03.883884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c9e90 is same with the state(6) to be set 00:30:58.920 [2024-11-25 14:29:03.883895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c9e90 (9): Bad file descriptor 00:30:58.920 [2024-11-25 14:29:03.883906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:58.920 [2024-11-25 14:29:03.883912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:58.920 [2024-11-25 14:29:03.883920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:58.920 [2024-11-25 14:29:03.883925] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:30:58.920 [2024-11-25 14:29:03.883930] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:58.920 [2024-11-25 14:29:03.883935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:58.920 [2024-11-25 14:29:03.884965] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:58.920 [2024-11-25 14:29:03.884983] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:58.920 14:29:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:58.920 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:58.921 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.921 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.921 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:58.921 14:29:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:59.182 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:59.183 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:59.183 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.183 14:29:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.566 [2024-11-25 14:29:05.237314] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:00.566 [2024-11-25 14:29:05.237328] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:00.566 [2024-11-25 14:29:05.237337] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:00.566 [2024-11-25 14:29:05.325597] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:00.566 [2024-11-25 14:29:05.633975] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:31:00.566 [2024-11-25 14:29:05.634636] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x19057a0:1 started. 
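Discovery is then stopped (host/discovery.sh@134), the test confirms that both the controller and bdev lists drain to empty and that two removal notifications arrived, and host/discovery.sh@141 restarts the service. The restart as it appears in the trace; -w corresponds to "wait_for_attach": true in the JSON-RPC request shown further down and makes the call block until the discovered subsystems are attached, which is why the attach and ctrlr-created records above land before the RPC returns:

    # restart discovery and block until discovered subsystems attach
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w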
00:31:00.566 [2024-11-25 14:29:05.635927] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:00.566 [2024-11-25 14:29:05.635948] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.566 [2024-11-25 14:29:05.644453] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x19057a0 was disconnected and freed. delete nvme_qpair. 
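A second bdev_nvme_start_discovery under the already-registered name nvme is expected to fail, and the request/response pair below shows the JSON-RPC error -17 ("File exists") it produces. The assertion runs through the NOT wrapper, whose behavior can be reconstructed from the autotest_common.sh xtrace (@652 through @679): run the command, capture its exit status, and succeed only if it was non-zero. A sketch; the signal-exit branch is an assumption:

    NOT() {
        local es=0
        "$@" || es=$?
        ((es > 128)) && return "$es"   # assumed: propagate signal deaths
        ((!es == 0))                   # succeed iff the command failed
    }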
00:31:00.566 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.566 request: 00:31:00.566 { 00:31:00.566 "name": "nvme", 00:31:00.566 "trtype": "tcp", 00:31:00.566 "traddr": "10.0.0.2", 00:31:00.566 "adrfam": "ipv4", 00:31:00.566 "trsvcid": "8009", 00:31:00.566 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:00.828 "wait_for_attach": true, 00:31:00.828 "method": "bdev_nvme_start_discovery", 00:31:00.828 "req_id": 1 00:31:00.828 } 00:31:00.828 Got JSON-RPC error response 00:31:00.828 response: 00:31:00.828 { 00:31:00.828 "code": -17, 00:31:00.828 "message": "File exists" 00:31:00.828 } 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 
00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 request: 00:31:00.828 { 00:31:00.828 "name": "nvme_second", 00:31:00.828 "trtype": "tcp", 00:31:00.828 "traddr": "10.0.0.2", 00:31:00.828 "adrfam": "ipv4", 00:31:00.828 "trsvcid": "8009", 00:31:00.828 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:00.828 "wait_for_attach": true, 00:31:00.828 "method": "bdev_nvme_start_discovery", 00:31:00.828 "req_id": 1 00:31:00.828 } 00:31:00.828 Got JSON-RPC error response 00:31:00.828 response: 00:31:00.828 { 00:31:00.828 "code": -17, 00:31:00.828 "message": "File exists" 00:31:00.828 } 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:00.828 14:29:05 
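Both bdev_nvme_start_discovery failures above are expected: the suite's NOT wrapper runs an RPC that must fail and only propagates the exit status. SPDK keys the discovery service on the discovery endpoint itself, so a second start against the same traddr/trsvcid is rejected with JSON-RPC error -17 ("File exists") even under a different -b base name, which is exactly what happens here for both "nvme" and "nvme_second" on port 8009. A minimal reproduction sketch, assuming a host app serving /tmp/host.sock with reachable discovery on 10.0.0.2:8009:

  # First start registers the discovery service; -w waits for the initial attach.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

  # Any further start against the same 10.0.0.2:8009 endpoint fails with -17,
  # regardless of the -b name given.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w \
      || echo "expected: File exists (-17)"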
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:00.828 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.829 14:29:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:02.214 [2024-11-25 14:29:06.900712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.214 [2024-11-25 14:29:06.900736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1905590 with addr=10.0.0.2, port=8010 00:31:02.214 [2024-11-25 14:29:06.900745] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:02.214 [2024-11-25 14:29:06.900751] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:02.214 [2024-11-25 14:29:06.900756] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:03.156 [2024-11-25 14:29:07.903080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.156 [2024-11-25 14:29:07.903098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1902810 with addr=10.0.0.2, port=8010 00:31:03.156 [2024-11-25 14:29:07.903106] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:03.156 [2024-11-25 14:29:07.903111] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:03.156 [2024-11-25 14:29:07.903117] bdev_nvme.c:7547:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:31:04.097 [2024-11-25 14:29:08.905087] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:04.097 request: 00:31:04.097 { 00:31:04.097 "name": "nvme_second", 00:31:04.097 "trtype": "tcp", 00:31:04.097 "traddr": "10.0.0.2", 00:31:04.097 "adrfam": "ipv4", 00:31:04.097 "trsvcid": "8010", 00:31:04.097 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:04.097 "wait_for_attach": false, 00:31:04.097 "attach_timeout_ms": 3000, 00:31:04.097 "method": "bdev_nvme_start_discovery", 00:31:04.097 "req_id": 1 00:31:04.097 } 00:31:04.097 Got JSON-RPC error response 00:31:04.097 response: 00:31:04.097 { 00:31:04.097 "code": -110, 00:31:04.097 "message": "Connection timed out" 00:31:04.097 } 00:31:04.097 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:04.097 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:31:04.097 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:04.097 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3557425 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:04.098 14:29:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:04.098 rmmod nvme_tcp 00:31:04.098 rmmod nvme_fabrics 00:31:04.098 rmmod nvme_keyring 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:04.098 14:29:09 
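Port 8010 has no listener in this run, so each connect() attempt fails with errno 111 (connection refused); once the 3000 ms budget set by -T expires, the poller logs "timed out while attaching discovery ctrlr" and the RPC returns -110 ("Connection timed out"). The flag-to-field mapping is visible in the request dump above: -T populates attach_timeout_ms and wait_for_attach stays false. The same call, condensed:

  # Bounded discovery attempt against a port nobody listens on; fails after ~3s.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
      -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 \
      || echo "expected: Connection timed out (-110)"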
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3557296 ']' 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3557296 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3557296 ']' 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3557296 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3557296 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3557296' 00:31:04.098 killing process with pid 3557296 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3557296 00:31:04.098 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3557296 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:04.357 14:29:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.270 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:06.270 00:31:06.270 real 0m20.187s 00:31:06.270 user 0m23.350s 00:31:06.270 sys 0m7.227s 00:31:06.270 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.270 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:06.270 ************************************ 00:31:06.270 END TEST nvmf_host_discovery 00:31:06.270 ************************************ 
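The teardown traced above runs in a fixed order: kill the host-side app holding /tmp/host.sock (pid 3557425), unload the kernel initiator modules, kill the nvmf_tgt reactor process (pid 3557296), strip the test's iptables rules, remove the target namespace, and flush the initiator address. The namespace step runs inside _remove_spdk_ns with its tracing redirected away, so the netns deletion below is an assumption about that helper rather than captured output:

  kill 3557425                                          # host app on /tmp/host.sock
  modprobe -v -r nvme-tcp                               # also drops nvme-fabrics, nvme-keyring
  kill 3557296                                          # nvmf_tgt (reactor_1 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # remove SPDK_NVMF-tagged rules only
  ip netns delete cvl_0_0_ns_spdk                       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                              # clear the initiator-side interface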
00:31:06.270 14:29:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:06.270 14:29:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:06.270 14:29:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.270 14:29:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:06.270 ************************************ 00:31:06.270 START TEST nvmf_host_multipath_status 00:31:06.270 ************************************ 00:31:06.271 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:06.533 * Looking for test storage... 00:31:06.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:06.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.533 --rc genhtml_branch_coverage=1 00:31:06.533 --rc genhtml_function_coverage=1 00:31:06.533 --rc genhtml_legend=1 00:31:06.533 --rc geninfo_all_blocks=1 00:31:06.533 --rc geninfo_unexecuted_blocks=1 00:31:06.533 00:31:06.533 ' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:06.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.533 --rc genhtml_branch_coverage=1 00:31:06.533 --rc genhtml_function_coverage=1 00:31:06.533 --rc genhtml_legend=1 00:31:06.533 --rc geninfo_all_blocks=1 00:31:06.533 --rc geninfo_unexecuted_blocks=1 00:31:06.533 00:31:06.533 ' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:06.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.533 --rc genhtml_branch_coverage=1 00:31:06.533 --rc genhtml_function_coverage=1 00:31:06.533 --rc genhtml_legend=1 00:31:06.533 --rc geninfo_all_blocks=1 00:31:06.533 --rc geninfo_unexecuted_blocks=1 00:31:06.533 00:31:06.533 ' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:06.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:06.533 --rc genhtml_branch_coverage=1 00:31:06.533 --rc genhtml_function_coverage=1 00:31:06.533 --rc genhtml_legend=1 00:31:06.533 --rc geninfo_all_blocks=1 00:31:06.533 --rc geninfo_unexecuted_blocks=1 00:31:06.533 00:31:06.533 ' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
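The lt/cmp_versions dance above is scripts/common.sh checking whether the installed lcov predates 2.x (lt 1.15 2), which decides which --rc option spelling goes into LCOV_OPTS. The trace shows each version split on ".", "-" and ":" and compared field by field, with missing fields treated as 0. A compact re-implementation of just the less-than case under those same rules (the real helper is a general cmp_versions; this sketch is simplified):

  lt() {  # succeeds when version $1 sorts strictly before $2
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < n; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"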
00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.533 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:06.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
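The "line 33: [: : integer expression expected" message above is a real shellism captured by the run, not log corruption: nvmf/common.sh evaluates '[' '' -eq 1 ']' with an unset flag, and test(1) refuses to compare an empty string numerically. The condition still behaves as false (non-zero status), so the script carries on; only the noise is avoidable. Illustrative only, not how common.sh is actually written:

  unset MAYBE_FLAG
  [ "$MAYBE_FLAG" -eq 1 ] && echo yes       # prints "[: : integer expression expected"
  [ "${MAYBE_FLAG:-0}" -eq 1 ] && echo yes  # defaulting the value keeps it quiet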
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:31:06.534 14:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:14.676 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:14.676 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:31:14.676 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:14.676 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:14.676 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:14.676 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:14.676 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:31:14.677 14:29:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:14.677 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:14.677 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:14.677 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:31:14.677 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:14.677 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:14.678 14:29:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:14.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:14.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:31:14.678 00:31:14.678 --- 10.0.0.2 ping statistics --- 00:31:14.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.678 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:14.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:14.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:31:14.678 00:31:14.678 --- 10.0.0.1 ping statistics --- 00:31:14.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:14.678 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3563514 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3563514 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3563514 ']' 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.678 14:29:18 
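nvmf_tcp_init above turns the two e810 ports into a point-to-point fixture: the target-side port (cvl_0_0) moves into a fresh network namespace and takes 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, one ACCEPT rule opens TCP/4420, and both directions are ping-verified. Condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator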
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.678 14:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:14.678 [2024-11-25 14:29:18.884789] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:31:14.678 [2024-11-25 14:29:18.884836] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.678 [2024-11-25 14:29:18.978470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:14.678 [2024-11-25 14:29:19.012884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.678 [2024-11-25 14:29:19.012916] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.678 [2024-11-25 14:29:19.012923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.678 [2024-11-25 14:29:19.012930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.678 [2024-11-25 14:29:19.012936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.678 [2024-11-25 14:29:19.013994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.678 [2024-11-25 14:29:19.014006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3563514 00:31:14.678 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:14.940 [2024-11-25 14:29:19.882141] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.940 14:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:15.200 Malloc0 00:31:15.200 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:31:15.200 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:15.461 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.722 [2024-11-25 14:29:20.579586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:15.722 [2024-11-25 14:29:20.764044] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3563879 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3563879 /var/tmp/bdevperf.sock 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3563879 ']' 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:15.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
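At this point the target side is fully provisioned and the host-side bdevperf (launched with -m 0x4 -z -r /var/tmp/bdevperf.sock) is coming up. The target sequence, condensed from the rpc.py calls in the trace; nvmf_tgt runs inside cvl_0_0_ns_spdk and answers on its default /var/tmp/spdk.sock. The -r flag on nvmf_create_subsystem is what enables ANA reporting, which the state flips further down depend on:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192   # "-t tcp -o" from NVMF_TRANSPORT_OPTS, plus -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same subsystem are what give bdevperf two ANA-managed paths to the one namespace, which is the whole point of the multipath-status checks that follow.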
00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:15.722 14:29:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:16.664 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.664 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:16.664 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:16.924 14:29:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:17.184 Nvme0n1 00:31:17.185 14:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:17.754 Nvme0n1 00:31:17.754 14:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:17.754 14:29:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:19.664 14:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:19.664 14:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:19.925 14:29:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:20.186 14:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:21.127 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:21.127 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:21.127 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.127 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.387 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:21.647 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.647 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:21.647 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.648 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.908 14:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:22.169 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.169 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:22.169 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
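Each check_status round above boils down to asking bdevperf which of Nvme0's two paths is current, connected, and accessible, and comparing the answers against the expected ANA layout; the rounds that follow flip the listeners to non_optimized/optimized and then non_optimized/non_optimized and re-run the same queries with updated expectations. Every port_status call is a bdev_nvme_get_io_paths RPC filtered with jq on the listener port, as in the trace; the helper below is a close paraphrase of what the script does, not its verbatim source:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  port_status() {  # usage: port_status 4420 current true
      local port=$1 attr=$2 expected=$3 got
      got=$($RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
      [[ $got == "$expected" ]]
  }

  port_status 4420 current true     # with both listeners optimized, 4420 is the active path
  port_status 4421 accessible true  # the other optimized path stays usable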
00:31:22.430 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:22.692 14:29:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:23.634 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:23.634 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:23.634 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.634 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.897 14:29:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:24.158 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.158 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:24.158 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.158 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:24.418 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.418 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:24.418 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:24.418 14:29:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.418 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.418 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:24.418 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.418 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:24.679 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.679 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:24.679 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:24.941 14:29:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:24.941 14:29:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.325 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:26.586 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.586 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:26.586 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.586 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.846 14:29:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:27.106 14:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.107 14:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:27.107 14:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:27.367 14:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:27.626 14:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:28.564 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:28.564 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:28.564 14:29:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.564 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.564 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.565 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:28.825 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.825 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.825 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:28.825 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.825 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.825 14:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:29.085 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.085 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:29.085 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.085 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:29.345 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.345 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:29.345 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.345 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:29.345 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.345 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:29.345 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.345 14:29:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.604 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:29.604 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:29.604 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:29.863 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:29.863 14:29:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:31.244 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:31.244 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:31.244 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.244 14:29:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.244 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:31.504 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.504 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:31.504 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.504 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:31.765 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.765 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:31.765 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.765 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:32.025 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.025 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:32.025 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.025 14:29:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.025 14:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:32.025 14:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:32.025 14:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:32.285 14:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:32.285 14:29:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:33.684 14:29:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.684 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:33.945 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.945 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:33.945 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.945 14:29:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:33.945 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.945 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:33.945 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.945 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.206 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:34.206 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:34.206 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.206 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:34.466 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.466 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:34.725 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:31:34.725 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:34.725 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:34.986 14:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:35.928 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:35.928 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:35.928 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.928 14:29:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:36.189 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.189 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:36.189 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.189 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.451 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:36.711 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.711 14:29:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:36.711 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.711 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:36.972 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.972 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:36.972 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.972 14:29:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.972 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.972 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:36.972 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:37.232 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:37.493 14:29:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:38.433 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:38.433 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:38.433 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.433 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:38.693 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.693 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:38.693 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.693 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.954 14:29:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:39.214 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.214 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:39.214 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.214 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:39.475 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.475 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:39.475 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.475 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.475 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.475 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:39.475 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:39.735 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:39.995 14:29:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
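Each cycle in the trace is the same three-step experiment: set the ANA state of both listeners on the target side, sleep one second so the host can observe the ANA change, then assert six booleans (current, connected, accessible for ports 4420 and 4421, in that order). A sketch of the two drivers, building on the port_status sketch above and under the same caveat (bodies reconstructed from the call sites, not copied from multipath_status.sh):

    # Reconstructed sketch; argument order inferred from the trace.
    set_ANA_state() {   # e.g. set_ANA_state non_optimized inaccessible
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    check_status() {    # current, connected, accessible: 4420 then 4421 each
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }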
00:31:40.935 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:40.935 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:40.935 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.935 14:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.196 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.456 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.456 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.456 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.456 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.716 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.977 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.977 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:41.977 14:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:42.238 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:42.498 14:29:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:43.439 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:43.439 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:43.439 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.439 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.699 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:43.960 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:31:43.960 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:43.960 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.960 14:29:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.220 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3563879 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3563879 ']' 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3563879 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3563879 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3563879' 00:31:44.480 killing process with pid 3563879 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3563879 00:31:44.480 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3563879 00:31:44.480 { 00:31:44.481 "results": [ 00:31:44.481 { 00:31:44.481 "job": "Nvme0n1", 
00:31:44.481 "core_mask": "0x4", 00:31:44.481 "workload": "verify", 00:31:44.481 "status": "terminated", 00:31:44.481 "verify_range": { 00:31:44.481 "start": 0, 00:31:44.481 "length": 16384 00:31:44.481 }, 00:31:44.481 "queue_depth": 128, 00:31:44.481 "io_size": 4096, 00:31:44.481 "runtime": 26.663089, 00:31:44.481 "iops": 12019.912621527086, 00:31:44.481 "mibps": 46.95278367784018, 00:31:44.481 "io_failed": 0, 00:31:44.481 "io_timeout": 0, 00:31:44.481 "avg_latency_us": 10629.913001339624, 00:31:44.481 "min_latency_us": 361.81333333333333, 00:31:44.481 "max_latency_us": 3019898.88 00:31:44.481 } 00:31:44.481 ], 00:31:44.481 "core_count": 1 00:31:44.481 } 00:31:44.762 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3563879 00:31:44.762 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:44.762 [2024-11-25 14:29:20.843215] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:31:44.762 [2024-11-25 14:29:20.843274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3563879 ] 00:31:44.762 [2024-11-25 14:29:20.930044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.762 [2024-11-25 14:29:20.965414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:44.762 Running I/O for 90 seconds... 00:31:44.762 10441.00 IOPS, 40.79 MiB/s [2024-11-25T13:29:49.852Z] 11379.00 IOPS, 44.45 MiB/s [2024-11-25T13:29:49.852Z] 11889.33 IOPS, 46.44 MiB/s [2024-11-25T13:29:49.852Z] 12152.50 IOPS, 47.47 MiB/s [2024-11-25T13:29:49.852Z] 12307.80 IOPS, 48.08 MiB/s [2024-11-25T13:29:49.852Z] 12390.33 IOPS, 48.40 MiB/s [2024-11-25T13:29:49.852Z] 12442.71 IOPS, 48.60 MiB/s [2024-11-25T13:29:49.852Z] 12464.50 IOPS, 48.69 MiB/s [2024-11-25T13:29:49.852Z] 12511.22 IOPS, 48.87 MiB/s [2024-11-25T13:29:49.852Z] 12541.00 IOPS, 48.99 MiB/s [2024-11-25T13:29:49.852Z] 12572.09 IOPS, 49.11 MiB/s [2024-11-25T13:29:49.852Z] [2024-11-25 14:29:34.740333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.762 [2024-11-25 14:29:34.740367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.762 [2024-11-25 14:29:34.740585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:44.762 [2024-11-25 14:29:34.740595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
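The terminated-job summary printed just above is internally consistent: with 4096-byte I/Os, throughput in MiB/s is iops x io_size / 2^20. Checking the two reported figures against each other with bc:

    $ echo 'scale=8; 12019.912621527086 * 4096 / 1048576' | bc -l
    46.95278367

which matches the reported "mibps": 46.95278367784018.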
00:31:44.762 [2024-11-25 14:29:34.740600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:31:44.762 [2024-11-25 14:29:34.740610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:44.762 [2024-11-25 14:29:34.740616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[identical command/completion pairs elided: WRITEs from lba 20544 up through 21424 and one READ at lba 20424 (len:8, i.e. 4 KiB each); every I/O on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:31:44.765 [2024-11-25 14:29:34.743070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:44.765 [2024-11-25 14:29:34.743076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
12500.67 IOPS, 48.83 MiB/s [2024-11-25T13:29:49.855Z]
11539.08 IOPS, 45.07 MiB/s [2024-11-25T13:29:49.855Z]
10714.86 IOPS, 41.85 MiB/s [2024-11-25T13:29:49.855Z]
10077.87 IOPS, 39.37 MiB/s [2024-11-25T13:29:49.855Z]
10242.31 IOPS, 40.01 MiB/s [2024-11-25T13:29:49.855Z]
10413.94 IOPS, 40.68 MiB/s [2024-11-25T13:29:49.855Z]
10774.17 IOPS, 42.09 MiB/s [2024-11-25T13:29:49.855Z]
11085.68 IOPS, 43.30 MiB/s [2024-11-25T13:29:49.855Z]
11254.70 IOPS, 43.96 MiB/s [2024-11-25T13:29:49.855Z]
11324.38 IOPS, 44.24 MiB/s [2024-11-25T13:29:49.855Z]
11392.18 IOPS, 44.50 MiB/s [2024-11-25T13:29:49.855Z]
11635.57 IOPS, 45.45 MiB/s [2024-11-25T13:29:49.855Z]
11851.88 IOPS, 46.30 MiB/s [2024-11-25T13:29:49.855Z]
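The progress samples above are consistent with the len:8 commands in the NOTICE prints: eight 512-byte logical blocks per I/O is 4 KiB, so MiB/s is just IOPS x 4096 / 2^20. A minimal cross-check of the first and last samples (plain awk; the commands below are ours, not part of the test run):

# len:8 in the command prints = 8 x 512-byte blocks = 4096 bytes per I/O,
# so MiB/s = IOPS * 4096 / (1024 * 1024)
awk 'BEGIN { printf "%.2f\n", 12500.67 * 4096 / (1024 * 1024) }'   # -> 48.83
awk 'BEGIN { printf "%.2f\n", 11851.88 * 4096 / (1024 * 1024) }'   # -> 46.30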
00:31:44.765 [2024-11-25 14:29:47.316667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:44.765 [2024-11-25 14:29:47.316703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[identical command/completion pairs elided: WRITEs from lba 109648 up through 110632 and READs from lba 109640 up through 110104 (len:8, i.e. 4 KiB each); every I/O on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02)]
00:31:44.768 [2024-11-25 14:29:47.319706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:44.768 [2024-11-25 14:29:47.319711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.319727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.319742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:109680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.319758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.319773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.319788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.319804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:109808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.319820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.319969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.319987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.319997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.320003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 
dnr:0 00:31:44.768 [2024-11-25 14:29:47.320013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.320018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.320034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.320049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.320065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:109840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.768 [2024-11-25 14:29:47.320777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.768 [2024-11-25 14:29:47.320854] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:44.768 [2024-11-25 14:29:47.320865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.320880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.320896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.320914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.320930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.320945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.320961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.320977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.320982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.321882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.321893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.321905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110600 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.321911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.321921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.321927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.321937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.321942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.321952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.321958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.321968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.321973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.321984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.321989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:109648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.322056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322066] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.322072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:109776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.322087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:44.769 
[2024-11-25 14:29:47.322229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.322282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.322297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.322313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.769 [2024-11-25 14:29:47.322328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:44.769 [2024-11-25 14:29:47.322370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.769 [2024-11-25 14:29:47.322375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.322392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.322407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.322423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.322438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.322453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.322469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.322484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.322495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.322500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.770 [2024-11-25 14:29:47.324734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:44.770 [2024-11-25 14:29:47.324760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.770 [2024-11-25 14:29:47.324765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.324781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.324796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.324813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.324828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 
14:29:47.324838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.324874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.324992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.324997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325599] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.771 [2024-11-25 14:29:47.325614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:44.771 [2024-11-25 14:29:47.325739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.771 [2024-11-25 14:29:47.325744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.336198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:44.772 [2024-11-25 14:29:47.336224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.336240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.336247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.336261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.336268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.336283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.336290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 
14:29:47.337393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:109872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.337547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.337978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.337993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.338001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.338015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.338022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.338035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.338042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.338056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.772 [2024-11-25 14:29:47.338063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.338077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.338084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.338098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.772 [2024-11-25 14:29:47.338105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:44.772 [2024-11-25 14:29:47.338118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.338125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.338139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.338146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.338168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.338175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.338189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.338196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.339816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.339842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.339863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.339884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.339905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.339926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.339947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.339967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.339981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.339989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.340052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.340072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:17 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.340203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.340223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.340285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.340306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.340326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.340349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.341649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.773 [2024-11-25 14:29:47.341663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.341678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.341686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 
14:29:47.341700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.341707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.341721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.341728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:44.773 [2024-11-25 14:29:47.341742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.773 [2024-11-25 14:29:47.341748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.341769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.341790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.341810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.341831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.341851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.341875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.341901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.341922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.341942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.341963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.341983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.341997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.342004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.342177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.342260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.342343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.774 [2024-11-25 14:29:47.342364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.342384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.342398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.342405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:44.774 [2024-11-25 14:29:47.344459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.774 [2024-11-25 14:29:47.344467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:111424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344629] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 
sqhd:003d p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.344968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.344981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:111056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.344988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.345009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.345030] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.345561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.345589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.345610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.345631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:111496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.345653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.345674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.345696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:111544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.345717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:111560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 14:29:47.345739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:111576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.775 [2024-11-25 
14:29:47.345761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:111104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.775 [2024-11-25 14:29:47.345782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:44.775 [2024-11-25 14:29:47.345796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.345803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.345817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:111168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.345824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.345839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.345846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.347287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.347310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:111312 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:111408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.347476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.347497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.776 [2024-11-25 14:29:47.347562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.776 [2024-11-25 14:29:47.347583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:44.776 [2024-11-25 14:29:47.347596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:44.776 [2024-11-25 14:29:47.347603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:31:44.776 [2024-11-25 14:29:47.347617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:44.776 [2024-11-25 14:29:47.347624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:31:44.776 [2024-11-25 14:29:47.347637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:44.776 [2024-11-25 14:29:47.347644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
[... several hundred similar *NOTICE* record pairs omitted: READ and WRITE commands on qid:1 (lba range roughly 109728-112360), each followed by an ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion with p:0 m:0 dnr:0, logged between 14:29:47.347658 and 14:29:47.358391 ...]
00:31:44.781 [2024-11-25 14:29:47.358402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:44.781 [2024-11-25 14:29:47.358408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:44.781 [2024-11-25 14:29:47.358418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:35 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.781 [2024-11-25 14:29:47.358424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:44.781 [2024-11-25 14:29:47.358435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.781 [2024-11-25 14:29:47.358440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:44.781 [2024-11-25 14:29:47.358451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.781 [2024-11-25 14:29:47.358456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:44.781 [2024-11-25 14:29:47.358467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.781 [2024-11-25 14:29:47.358473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 
14:29:47.358586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.358640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.358657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.358673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.358689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.358738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.358755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.358771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.358782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.358788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.359173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.359191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:111656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.359208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.359225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.359241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.359257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.359273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.359290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.359309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.359327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.359340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.359347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.360345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.360363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.360379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.360396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.360412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:44.782 [2024-11-25 14:29:47.360429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.360445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.782 [2024-11-25 14:29:47.360462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:44.782 [2024-11-25 14:29:47.360473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.782 [2024-11-25 14:29:47.360478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:1 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:111984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:111888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360755] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.360761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.360772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.360777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.361703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:111784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.361760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:111704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 
cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:111768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.361900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:112680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.361916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.361935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.361954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.361987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.361998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.783 [2024-11-25 14:29:47.362004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.362014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.362020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.362031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.362038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:44.783 [2024-11-25 14:29:47.362050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.783 [2024-11-25 14:29:47.362058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.362092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.362108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.362125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.362718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.362771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.362787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.362804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.362815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.362820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:53 nsid:1 lba:112256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.363871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.363888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.363904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.363919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.363935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.363950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.363966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.363981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.363995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:112936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.364080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.364112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.364129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.364144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e 
p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.364198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.784 [2024-11-25 14:29:47.364214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:44.784 [2024-11-25 14:29:47.364224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:112752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.784 [2024-11-25 14:29:47.364229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.364239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.785 [2024-11-25 14:29:47.364245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.364256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.364262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.364272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.785 [2024-11-25 14:29:47.364278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.364982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.785 [2024-11-25 14:29:47.364993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.785 [2024-11-25 14:29:47.365106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:112344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.785 [2024-11-25 14:29:47.365122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.785 [2024-11-25 14:29:47.365153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 14:29:47.365173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:112944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.785 [2024-11-25 14:29:47.365189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:44.785 [2024-11-25 14:29:47.365199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.785 [2024-11-25 
14:29:47.365205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:44.785 [2024-11-25 14:29:47.365215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:111856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:44.785 [2024-11-25 14:29:47.365221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:44.785 [2024-11-25 14:29:47.365550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:44.785 [2024-11-25 14:29:47.365558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:31:44.785 [2024-11-25 14:29:47.365570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:44.785 [2024-11-25 14:29:47.365575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:31:44.785 [2024-11-25 14:29:47.365585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:112960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:44.785 [2024-11-25 14:29:47.365591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
[roughly 190 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided for readability: READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1 nsid:1, lba 111752-113888, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; sqhd advances one per completion from 0037, wraps past 007f to 0000, and reaches 0074]
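Editor's note: every completion in this stretch carries the same status pair, which spdk_nvme_print_completion() renders as "(03/02)": Status Code Type 0x3 (Path Related) with Status Code 0x02 (Asymmetric Access Inaccessible). The target is reporting the namespace's ANA group as inaccessible on this path, so each queued READ/WRITE fails with a path error rather than a media error, and dnr:0 (Do Not Retry clear) marks the commands as retryable once a usable path returns. The minimal sketch below is not SPDK source; decode_cpl_dw3() is a hypothetical helper that assumes only the NVMe base-spec completion layout (status in bits 31:17 of completion dword 3; within that field SC in bits 7:0, SCT in bits 10:8, More in bit 13, DNR in bit 14) and shows how the "(SCT/SC)" pair in these lines is unpacked:

    /* Sketch only (not SPDK code): decode the "(SCT/SC)" status pair that
     * spdk_nvme_print_completion() logs, e.g. "(03/02)" above. */
    #include <stdint.h>
    #include <stdio.h>

    struct nvme_status {
        uint8_t sct;  /* status code type: 0x3 = Path Related */
        uint8_t sc;   /* status code: 0x02 = Asymmetric Access Inaccessible */
        uint8_t m;    /* More bit */
        uint8_t dnr;  /* Do Not Retry bit */
    };

    /* Hypothetical helper: completion dword 3 holds CID in bits 15:0,
     * the phase tag in bit 16, and the 15-bit status field in bits 31:17. */
    static struct nvme_status decode_cpl_dw3(uint32_t dw3)
    {
        uint16_t st = (uint16_t)(dw3 >> 17);
        struct nvme_status s = {
            .sc  = (uint8_t)(st & 0xff),
            .sct = (uint8_t)((st >> 8) & 0x7),
            .m   = (uint8_t)((st >> 13) & 0x1),
            .dnr = (uint8_t)((st >> 14) & 0x1),
        };
        return s;
    }

    int main(void)
    {
        /* Assumed example dword 3: CID 27, phase 0, SCT 0x3 / SC 0x02,
         * M 0, DNR 0 -- matching the "(03/02) ... m:0 dnr:0" lines above. */
        uint32_t dw3 = ((uint32_t)((0x3 << 8) | 0x02) << 17) | 27;
        struct nvme_status s = decode_cpl_dw3(dw3);
        printf("(%02x/%02x) m:%u dnr:%u\n",
               s.sct, s.sc, (unsigned)s.m, (unsigned)s.dnr);
        return 0;
    }

Compiled standalone (e.g. cc decode.c && ./a.out), the sketch prints "(03/02) m:0 dnr:0", the status seen on every completion here; once the test flips the ANA state back to optimized, the same decode would yield SCT 0x0 (generic) with SC 0x00 (successful completion).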
dnr:0 00:31:44.790 [2024-11-25 14:29:47.375170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.790 [2024-11-25 14:29:47.375175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.790 [2024-11-25 14:29:47.375193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.790 [2024-11-25 14:29:47.375268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.790 [2024-11-25 14:29:47.375375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.790 [2024-11-25 14:29:47.375392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.790 [2024-11-25 14:29:47.375407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.790 [2024-11-25 14:29:47.375423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.790 [2024-11-25 14:29:47.375438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:44.790 [2024-11-25 14:29:47.375448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.375812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.375838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.375843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.377233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 
14:29:47.377243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.377248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.791 [2024-11-25 14:29:47.377266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.791 [2024-11-25 14:29:47.377341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:44.791 [2024-11-25 14:29:47.377352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.377357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.377372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.377388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.377403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.377418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.377434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.377450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.377465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.377481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.377496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.377506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.377512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.792 [2024-11-25 14:29:47.378566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:44.792 [2024-11-25 14:29:47.378623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.792 [2024-11-25 14:29:47.378628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.378638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.378643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.378653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.378658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.378668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.793 [2024-11-25 14:29:47.378674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.793 [2024-11-25 14:29:47.379216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.793 [2024-11-25 14:29:47.379243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.793 [2024-11-25 14:29:47.379259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 
14:29:47.379269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.793 [2024-11-25 14:29:47.379355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.793 [2024-11-25 14:29:47.379417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:44.793 [2024-11-25 14:29:47.379433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:44.793 [2024-11-25 14:29:47.379849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:44.793 [2024-11-25 14:29:47.379854] nvme_qpair.c: 
00:31:44.793 11960.32 IOPS, 46.72 MiB/s
[2024-11-25T13:29:49.883Z] 11998.15 IOPS, 46.87 MiB/s
[2024-11-25T13:29:49.883Z] Received shutdown signal, test time was about 26.663697 seconds
00:31:44.793
00:31:44.793                              Latency(us)
00:31:44.793 [2024-11-25T13:29:49.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:44.793 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:44.793 Verification LBA range: start 0x0 length 0x4000
00:31:44.793 Nvme0n1 : 26.66 12019.91 46.95 0.00 0.00 10629.91 361.81 3019898.88
00:31:44.793 [2024-11-25T13:29:49.883Z] ===================================================================================================================
00:31:44.793 [2024-11-25T13:29:49.883Z] Total : 12019.91 46.95 0.00 0.00 10629.91 361.81 3019898.88
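In the completions condensed above, "(03/02)" decodes per the NVMe base specification as Status Code Type 03h (Path Related Status) with Status Code 02h (ANA Inaccessible): the controller is answering every queued READ/WRITE with "this namespace is unreachable over this path" while the test drives the listener's ANA state, and dnr:0 (Do Not Retry clear) leaves each command retryable on the surviving path. The summary rows are also internally consistent with the 4096-byte IO size in the job header; a minimal awk sanity check, using only figures quoted from the table above:

  awk 'BEGIN { iops = 12019.91; io_bytes = 4096;          # Nvme0n1 row: average IOPS, 4 KiB IOs
               printf "%.2f MiB/s\n", iops * io_bytes / (1024 * 1024) }'
  # prints 46.95 MiB/s, matching the MiB/s column of the Nvme0n1 and Total rows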
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:44.793 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3563514 ']'
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3563514
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3563514 ']'
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3563514
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3563514
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3563514'
killing process with pid 3563514
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3563514
00:31:45.054 14:29:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3563514
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
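Condensed, the nvmftestfini/nvmfcleanup path traced above reduces to four steps. A sketch of the same teardown, reusing only the pid, NQN, and paths visible in this log (assumes root on the test node; not a drop-in replacement for the library function):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # tear down the subsystem while the target still runs
  kill 3563514                                           # stop the nvmf_tgt reactor (pid from the killprocess trace)
  modprobe -v -r nvme-tcp                                # also removes nvme_fabrics/nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics                            # usually a no-op once the dependency chain is gone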
00:31:45.054 14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
14:29:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:47.599 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:47.599
00:31:47.599 real 0m40.792s
00:31:47.599 user 1m46.224s
00:31:47.599 sys 0m11.024s
00:31:47.599 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:47.599 14:29:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:47.599 ************************************
00:31:47.599 END TEST nvmf_host_multipath_status
00:31:47.599 ************************************
00:31:47.599 14:29:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:31:47.599 14:29:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:31:47.599 14:29:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:47.599 14:29:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:47.599 ************************************
00:31:47.599 START TEST nvmf_discovery_remove_ifc
00:31:47.599 ************************************
00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:31:47.600 * Looking for test storage...
00:31:47.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.600 --rc genhtml_branch_coverage=1 00:31:47.600 --rc genhtml_function_coverage=1 00:31:47.600 --rc genhtml_legend=1 00:31:47.600 --rc geninfo_all_blocks=1 00:31:47.600 --rc geninfo_unexecuted_blocks=1 00:31:47.600 00:31:47.600 ' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.600 --rc genhtml_branch_coverage=1 00:31:47.600 --rc genhtml_function_coverage=1 00:31:47.600 --rc genhtml_legend=1 00:31:47.600 --rc geninfo_all_blocks=1 00:31:47.600 --rc geninfo_unexecuted_blocks=1 00:31:47.600 00:31:47.600 ' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.600 --rc genhtml_branch_coverage=1 00:31:47.600 --rc genhtml_function_coverage=1 00:31:47.600 --rc genhtml_legend=1 00:31:47.600 --rc geninfo_all_blocks=1 00:31:47.600 --rc geninfo_unexecuted_blocks=1 00:31:47.600 00:31:47.600 ' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:47.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.600 --rc genhtml_branch_coverage=1 00:31:47.600 --rc genhtml_function_coverage=1 00:31:47.600 --rc genhtml_legend=1 00:31:47.600 --rc geninfo_all_blocks=1 00:31:47.600 --rc geninfo_unexecuted_blocks=1 00:31:47.600 00:31:47.600 ' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.600 
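The lt/cmp_versions trace above is a plain field-wise numeric comparison: both version strings are split on dots, dashes, and colons (IFS=.-:), each field is validated against ^[0-9]+$ by the decimal helper, fields are compared as integers, and the shorter version is padded with zeros. A rough standalone reconstruction of that behavior (a sketch assuming purely numeric fields, not the exact scripts/common.sh source):

  ver_lt() {
      # return 0 (true) when $1 sorts strictly before $2, field by field
      local IFS='.-:'
      local -a v1 v2
      local i a b
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          a=${v1[i]:-0}                 # missing fields default to zero
          b=${v2[i]:-0}
          ((a < b)) && return 0
          ((a > b)) && return 1
      done
      return 1                          # equal versions are not less-than
  }
  ver_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # mirrors the traced call: lt 1.15 2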
14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.600 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:47.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
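(Annotation) Aside from the heavily duplicated PATH that paths/export.sh keeps re-exporting, the interesting part of this block is how nvmf/common.sh builds the host identity: nvme gen-hostnqn (a standard nvme-cli subcommand) prints an nqn.2014-08.org.nvmexpress:uuid:... string, and the host ID is just the UUID portion of that NQN, as the matching NVME_HOSTNQN/NVME_HOSTID values above show. A hedged sketch of the derivation, with an illustrative kernel-initiator use of the result (the test itself connects through SPDK's userspace host stack instead):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID after "uuid:"
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # illustrative only: a kernel initiator would consume the identity like this
    nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn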
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.601 14:29:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:31:55.749 14:29:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.749 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:55.750 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.750 14:29:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:55.750 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:55.750 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:55.750 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
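(Annotation) The device scan above is a plain sysfs walk: for each Intel E810 function found on the bus (0000:4b:00.0 and 0000:4b:00.1, device ID 0x159b, bound to the ice driver), common.sh lists /sys/bus/pci/devices/<bdf>/net/ to learn which net interfaces sit on top (cvl_0_0 and cvl_0_1) and keeps the ones whose link is up. A condensed sketch of the same walk, using this run's addresses:

    # map each PCI function to its net interface(s) via standard sysfs paths
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [[ -e $path ]] || continue        # function without a netdev
            dev=${path##*/}
            echo "Found net devices under $pci: $dev ($(cat /sys/class/net/$dev/operstate))"
        done
    done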
net_devs+=("${pci_net_devs[@]}") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:55.750 
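(Annotation) nvmf_tcp_init, traced above, then builds the physical test topology: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, each side gets one address of a /24, and an iptables rule opens the NVMe/TCP port. The two pings that follow confirm reachability in both directions. Condensed, the sequence is:

    ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT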
14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:55.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:55.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:31:55.750 00:31:55.750 --- 10.0.0.2 ping statistics --- 00:31:55.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.750 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:31:55.750 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:55.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:31:55.750 00:31:55.750 --- 10.0.0.1 ping statistics --- 00:31:55.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.750 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:55.751 14:29:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3573909 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3573909 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3573909 ']' 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:55.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.751 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.751 [2024-11-25 14:30:00.102016] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:31:55.751 [2024-11-25 14:30:00.102090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.751 [2024-11-25 14:30:00.204960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:55.751 [2024-11-25 14:30:00.256445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.751 [2024-11-25 14:30:00.256494] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.751 [2024-11-25 14:30:00.256503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.751 [2024-11-25 14:30:00.256511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.751 [2024-11-25 14:30:00.256518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.751 [2024-11-25 14:30:00.257316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.014 14:30:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.014 [2024-11-25 14:30:00.984606] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.014 [2024-11-25 14:30:00.992919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:56.014 null0 00:31:56.014 [2024-11-25 14:30:01.024824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3574180 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3574180 /tmp/host.sock 00:31:56.014 14:30:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3574180 ']' 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:56.014 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.014 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.276 [2024-11-25 14:30:01.102806] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:31:56.276 [2024-11-25 14:30:01.102874] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574180 ] 00:31:56.276 [2024-11-25 14:30:01.194642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.276 [2024-11-25 14:30:01.247185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.848 14:30:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.110 14:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.110 14:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:57.110 14:30:02 
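(Annotation) Two SPDK apps are now up: the target nvmf_tgt runs inside the namespace (listening on 10.0.0.2:8009 for discovery and 10.0.0.2:4420 for I/O), and a second nvmf_tgt plays the host, controlled over the /tmp/host.sock RPC socket. rpc_cmd in the trace is the test suite's thin wrapper around SPDK's scripts/rpc.py, so the host-side startup reduces to the following sketch (flags copied verbatim from the trace; paths assume the SPDK repo root):

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    hostpid=$!
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach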
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.110 14:30:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.054 [2024-11-25 14:30:03.032702] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:58.054 [2024-11-25 14:30:03.032723] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:58.054 [2024-11-25 14:30:03.032736] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:58.315 [2024-11-25 14:30:03.162160] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:58.315 [2024-11-25 14:30:03.263114] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:58.315 [2024-11-25 14:30:03.264043] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11d2430:1 started. 00:31:58.315 [2024-11-25 14:30:03.265617] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:58.315 [2024-11-25 14:30:03.265658] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:58.315 [2024-11-25 14:30:03.265680] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:58.315 [2024-11-25 14:30:03.265694] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:58.315 [2024-11-25 14:30:03.265714] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.315 [2024-11-25 14:30:03.272451] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11d2430 was disconnected and freed. delete nvme_qpair. 
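(Annotation) Discovery worked end to end above: the host attached the discovery controller on 10.0.0.2:8009, read the log page, found subsystem nqn.2016-06.io.spdk:cnode0 on port 4420, and created controller nvme0, so a bdev named nvme0n1 should now exist. The wait_for_bdev/get_bdev_list calls that start here and repeat below are a one-second polling loop over the host's bdev list; the pattern reduces to this sketch (rpc.py standing in for the traced rpc_cmd wrapper):

    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {                  # spin until the bdev list matches $1
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1    # namespace attached by discovery
    wait_for_bdev ''         # later: bdev must vanish once the path is pulled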
00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:58.315 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:58.576 14:30:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.519 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.520 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:59.520 14:30:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:00.464 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.464 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.464 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.464 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.464 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.464 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.464 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.726 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.726 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:00.726 14:30:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:01.667 14:30:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:02.611 14:30:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:03.997 [2024-11-25 14:30:08.706352] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:03.997 [2024-11-25 14:30:08.706387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.997 [2024-11-25 14:30:08.706397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.997 [2024-11-25 14:30:08.706403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.997 [2024-11-25 14:30:08.706408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.997 [2024-11-25 14:30:08.706414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.997 [2024-11-25 14:30:08.706419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.997 [2024-11-25 14:30:08.706425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.997 [2024-11-25 14:30:08.706430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.997 [2024-11-25 14:30:08.706436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:03.997 [2024-11-25 14:30:08.706441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:03.997 [2024-11-25 14:30:08.706446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aec80 is same with the state(6) to be set 00:32:03.997 [2024-11-25 14:30:08.716374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aec80 (9): Bad file descriptor 00:32:03.997 [2024-11-25 14:30:08.726406] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:32:03.997 [2024-11-25 14:30:08.726416] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:32:03.997 [2024-11-25 14:30:08.726419] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:32:03.997 [2024-11-25 14:30:08.726423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:03.997 [2024-11-25 14:30:08.726438] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:03.997 14:30:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.084 [2024-11-25 14:30:09.755245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:05.084 [2024-11-25 14:30:09.755345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11aec80 with addr=10.0.0.2, port=4420 00:32:05.084 [2024-11-25 14:30:09.755379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11aec80 is same with the state(6) to be set 00:32:05.084 [2024-11-25 14:30:09.755442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11aec80 (9): Bad file descriptor 00:32:05.084 [2024-11-25 14:30:09.755564] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:32:05.084 [2024-11-25 14:30:09.755622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:05.084 [2024-11-25 14:30:09.755645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:05.084 [2024-11-25 14:30:09.755680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.084 [2024-11-25 14:30:09.755702] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:05.084 [2024-11-25 14:30:09.755720] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:05.084 [2024-11-25 14:30:09.755735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:32:05.084 [2024-11-25 14:30:09.755758] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:32:05.084 [2024-11-25 14:30:09.755773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.084 14:30:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.674 [2024-11-25 14:30:10.758184] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:32:05.674 [2024-11-25 14:30:10.758209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:05.674 [2024-11-25 14:30:10.758225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:05.674 [2024-11-25 14:30:10.758232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:05.674 [2024-11-25 14:30:10.758239] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:32:05.674 [2024-11-25 14:30:10.758244] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:32:05.674 [2024-11-25 14:30:10.758249] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:32:05.674 [2024-11-25 14:30:10.758253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:32:05.674 [2024-11-25 14:30:10.758275] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:05.674 [2024-11-25 14:30:10.758299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.674 [2024-11-25 14:30:10.758306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.674 [2024-11-25 14:30:10.758314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.674 [2024-11-25 14:30:10.758319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.674 [2024-11-25 14:30:10.758325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.674 [2024-11-25 14:30:10.758330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.674 [2024-11-25 14:30:10.758339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.674 [2024-11-25 14:30:10.758344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.674 [2024-11-25 14:30:10.758350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:05.674 [2024-11-25 14:30:10.758355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.674 [2024-11-25 14:30:10.758361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:32:05.674 [2024-11-25 14:30:10.758560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119e3c0 (9): Bad file descriptor 00:32:05.674 [2024-11-25 14:30:10.759568] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:05.674 [2024-11-25 14:30:10.759577] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.935 14:30:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.935 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:05.935 14:30:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.319 14:30:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:07.319 14:30:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.892 [2024-11-25 14:30:12.819306] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:07.892 [2024-11-25 14:30:12.819323] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:07.892 [2024-11-25 14:30:12.819334] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:07.892 [2024-11-25 14:30:12.907584] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:08.153 [2024-11-25 14:30:13.008390] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:32:08.153 [2024-11-25 14:30:13.009080] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x11b9300:1 started. 00:32:08.153 [2024-11-25 14:30:13.009976] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:08.153 [2024-11-25 14:30:13.010003] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:08.153 [2024-11-25 14:30:13.010018] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:08.154 [2024-11-25 14:30:13.010029] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:08.154 [2024-11-25 14:30:13.010035] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:08.154 [2024-11-25 14:30:13.016436] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x11b9300 was disconnected and freed. delete nvme_qpair. 
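(Annotation) Recovery is symmetric: once the address and link are restored inside the namespace, the still-running discovery service reconnects to 10.0.0.2:8009, finds nqn.2016-06.io.spdk:cnode0 again, and attaches it under the next free controller name, so the bdev comes back as nvme1n1 rather than nvme0n1. The restore step reduces to (reusing the wait_for_bdev sketch from earlier):

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1    # discovery re-attached the subsystem as nvme1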
00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3574180 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3574180 ']' 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3574180 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3574180 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3574180' 00:32:08.154 killing process with pid 3574180 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3574180 00:32:08.154 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3574180 00:32:08.414 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:08.414 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:08.414 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:32:08.414 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.415 rmmod nvme_tcp 00:32:08.415 rmmod nvme_fabrics 00:32:08.415 rmmod nvme_keyring 00:32:08.415 14:30:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3573909 ']' 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3573909 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3573909 ']' 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3573909 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3573909 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3573909' 00:32:08.415 killing process with pid 3573909 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3573909 00:32:08.415 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3573909 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.676 14:30:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.591 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.591 00:32:10.591 real 0m23.367s 00:32:10.591 user 0m27.309s 00:32:10.591 sys 0m7.147s 00:32:10.591 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.591 14:30:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.591 ************************************ 00:32:10.591 END TEST nvmf_discovery_remove_ifc 00:32:10.591 ************************************ 00:32:10.591 14:30:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:10.591 14:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:10.591 14:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.591 14:30:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.853 ************************************ 00:32:10.853 START TEST nvmf_identify_kernel_target 00:32:10.853 ************************************ 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:10.853 * Looking for test storage... 00:32:10.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.853 --rc genhtml_branch_coverage=1 00:32:10.853 --rc genhtml_function_coverage=1 00:32:10.853 --rc genhtml_legend=1 00:32:10.853 --rc geninfo_all_blocks=1 00:32:10.853 --rc geninfo_unexecuted_blocks=1 00:32:10.853 00:32:10.853 ' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.853 --rc genhtml_branch_coverage=1 00:32:10.853 --rc genhtml_function_coverage=1 00:32:10.853 --rc genhtml_legend=1 00:32:10.853 --rc geninfo_all_blocks=1 00:32:10.853 --rc geninfo_unexecuted_blocks=1 00:32:10.853 00:32:10.853 ' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.853 --rc genhtml_branch_coverage=1 00:32:10.853 --rc genhtml_function_coverage=1 00:32:10.853 --rc genhtml_legend=1 00:32:10.853 --rc geninfo_all_blocks=1 00:32:10.853 --rc geninfo_unexecuted_blocks=1 00:32:10.853 00:32:10.853 ' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:10.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.853 --rc genhtml_branch_coverage=1 00:32:10.853 --rc genhtml_function_coverage=1 00:32:10.853 --rc genhtml_legend=1 00:32:10.853 --rc geninfo_all_blocks=1 00:32:10.853 --rc geninfo_unexecuted_blocks=1 00:32:10.853 00:32:10.853 ' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.853 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:32:10.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.854 14:30:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:19.001 14:30:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:19.001 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:19.001 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:19.001 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.001 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:19.002 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:19.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:19.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:32:19.002 00:32:19.002 --- 10.0.0.2 ping statistics --- 00:32:19.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.002 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:19.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:32:19.002 00:32:19.002 --- 10.0.0.1 ping statistics --- 00:32:19.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.002 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:19.002 14:30:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:19.002 14:30:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:22.311 Waiting for block devices as requested 00:32:22.311 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:22.311 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:22.311 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:22.311 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:22.311 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:22.572 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:22.572 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:22.572 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:22.572 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:22.834 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:23.095 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:23.096 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:23.096 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:23.096 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:23.356 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:23.356 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:23.356 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
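The configure_kernel_target sequence traced below is the standard Linux nvmet configfs recipe. A condensed sketch, assuming the values visible in the trace (subsystem NQN nqn.2016-06.io.spdk:testnqn, listen address 10.0.0.1:4420 over tcp/ipv4, backing device /dev/nvme0n1); the attribute names (attr_model, attr_allow_any_host) are inferred from the echoed values rather than quoted from the trace:

    # Condensed sketch of the kernel NVMe-oF target setup performed below.
    modprobe nvmet                       # the trace loads nvmet before touching configfs
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # model string, as echoed below
    echo 1             > "$subsys/attr_allow_any_host"            # accept any host NQN
    echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"       # backing block device
    echo 1             > "$subsys/namespaces/1/enable"
    echo 10.0.0.1      > "$port/addr_traddr"
    echo tcp           > "$port/addr_trtype"
    echo 4420          > "$port/addr_trsvcid"
    echo ipv4          > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port

Once the symlink lands, the discovery service at 10.0.0.1:4420 advertises two records, the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn, which is exactly what the nvme discover output below reports.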
00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:23.929 No valid GPT data, bailing 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:23.929 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:23.929 00:32:23.929 Discovery Log Number of Records 2, Generation counter 2 00:32:23.929 =====Discovery Log Entry 0====== 00:32:23.929 trtype: tcp 00:32:23.929 adrfam: ipv4 00:32:23.929 subtype: current discovery subsystem 00:32:23.929 treq: not specified, sq flow control disable supported 00:32:23.929 portid: 1 00:32:23.929 trsvcid: 4420 00:32:23.929 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:23.929 traddr: 10.0.0.1 00:32:23.929 eflags: none 00:32:23.929 sectype: none 00:32:23.929 =====Discovery Log Entry 1====== 00:32:23.929 trtype: tcp 00:32:23.929 adrfam: ipv4 00:32:23.929 subtype: nvme subsystem 00:32:23.929 treq: not specified, sq flow control disable 
supported 00:32:23.929 portid: 1 00:32:23.929 trsvcid: 4420 00:32:23.929 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:23.929 traddr: 10.0.0.1 00:32:23.929 eflags: none 00:32:23.929 sectype: none 00:32:23.930 14:30:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:23.930 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:24.192 ===================================================== 00:32:24.192 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:24.192 ===================================================== 00:32:24.192 Controller Capabilities/Features 00:32:24.192 ================================ 00:32:24.192 Vendor ID: 0000 00:32:24.192 Subsystem Vendor ID: 0000 00:32:24.192 Serial Number: c5294e844b2635bec5dd 00:32:24.192 Model Number: Linux 00:32:24.192 Firmware Version: 6.8.9-20 00:32:24.192 Recommended Arb Burst: 0 00:32:24.192 IEEE OUI Identifier: 00 00 00 00:32:24.192 Multi-path I/O 00:32:24.192 May have multiple subsystem ports: No 00:32:24.192 May have multiple controllers: No 00:32:24.192 Associated with SR-IOV VF: No 00:32:24.192 Max Data Transfer Size: Unlimited 00:32:24.192 Max Number of Namespaces: 0 00:32:24.192 Max Number of I/O Queues: 1024 00:32:24.192 NVMe Specification Version (VS): 1.3 00:32:24.192 NVMe Specification Version (Identify): 1.3 00:32:24.192 Maximum Queue Entries: 1024 00:32:24.192 Contiguous Queues Required: No 00:32:24.192 Arbitration Mechanisms Supported 00:32:24.192 Weighted Round Robin: Not Supported 00:32:24.192 Vendor Specific: Not Supported 00:32:24.192 Reset Timeout: 7500 ms 00:32:24.192 Doorbell Stride: 4 bytes 00:32:24.192 NVM Subsystem Reset: Not Supported 00:32:24.192 Command Sets Supported 00:32:24.192 NVM Command Set: Supported 00:32:24.192 Boot Partition: Not Supported 00:32:24.192 Memory Page Size Minimum: 4096 bytes 00:32:24.192 Memory Page Size Maximum: 4096 bytes 00:32:24.192 Persistent Memory Region: Not Supported 00:32:24.192 Optional Asynchronous Events Supported 00:32:24.192 Namespace Attribute Notices: Not Supported 00:32:24.192 Firmware Activation Notices: Not Supported 00:32:24.192 ANA Change Notices: Not Supported 00:32:24.192 PLE Aggregate Log Change Notices: Not Supported 00:32:24.192 LBA Status Info Alert Notices: Not Supported 00:32:24.192 EGE Aggregate Log Change Notices: Not Supported 00:32:24.192 Normal NVM Subsystem Shutdown event: Not Supported 00:32:24.192 Zone Descriptor Change Notices: Not Supported 00:32:24.192 Discovery Log Change Notices: Supported 00:32:24.192 Controller Attributes 00:32:24.192 128-bit Host Identifier: Not Supported 00:32:24.192 Non-Operational Permissive Mode: Not Supported 00:32:24.192 NVM Sets: Not Supported 00:32:24.192 Read Recovery Levels: Not Supported 00:32:24.192 Endurance Groups: Not Supported 00:32:24.192 Predictable Latency Mode: Not Supported 00:32:24.192 Traffic Based Keep ALive: Not Supported 00:32:24.192 Namespace Granularity: Not Supported 00:32:24.192 SQ Associations: Not Supported 00:32:24.192 UUID List: Not Supported 00:32:24.192 Multi-Domain Subsystem: Not Supported 00:32:24.192 Fixed Capacity Management: Not Supported 00:32:24.192 Variable Capacity Management: Not Supported 00:32:24.192 Delete Endurance Group: Not Supported 00:32:24.192 Delete NVM Set: Not Supported 00:32:24.192 Extended LBA Formats Supported: Not Supported 00:32:24.192 Flexible Data Placement 
Supported: Not Supported 00:32:24.192 00:32:24.192 Controller Memory Buffer Support 00:32:24.192 ================================ 00:32:24.192 Supported: No 00:32:24.192 00:32:24.192 Persistent Memory Region Support 00:32:24.192 ================================ 00:32:24.192 Supported: No 00:32:24.192 00:32:24.192 Admin Command Set Attributes 00:32:24.192 ============================ 00:32:24.192 Security Send/Receive: Not Supported 00:32:24.192 Format NVM: Not Supported 00:32:24.192 Firmware Activate/Download: Not Supported 00:32:24.192 Namespace Management: Not Supported 00:32:24.192 Device Self-Test: Not Supported 00:32:24.192 Directives: Not Supported 00:32:24.192 NVMe-MI: Not Supported 00:32:24.192 Virtualization Management: Not Supported 00:32:24.192 Doorbell Buffer Config: Not Supported 00:32:24.192 Get LBA Status Capability: Not Supported 00:32:24.192 Command & Feature Lockdown Capability: Not Supported 00:32:24.192 Abort Command Limit: 1 00:32:24.192 Async Event Request Limit: 1 00:32:24.192 Number of Firmware Slots: N/A 00:32:24.192 Firmware Slot 1 Read-Only: N/A 00:32:24.192 Firmware Activation Without Reset: N/A 00:32:24.192 Multiple Update Detection Support: N/A 00:32:24.192 Firmware Update Granularity: No Information Provided 00:32:24.192 Per-Namespace SMART Log: No 00:32:24.192 Asymmetric Namespace Access Log Page: Not Supported 00:32:24.192 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:24.192 Command Effects Log Page: Not Supported 00:32:24.192 Get Log Page Extended Data: Supported 00:32:24.192 Telemetry Log Pages: Not Supported 00:32:24.192 Persistent Event Log Pages: Not Supported 00:32:24.192 Supported Log Pages Log Page: May Support 00:32:24.192 Commands Supported & Effects Log Page: Not Supported 00:32:24.193 Feature Identifiers & Effects Log Page:May Support 00:32:24.193 NVMe-MI Commands & Effects Log Page: May Support 00:32:24.193 Data Area 4 for Telemetry Log: Not Supported 00:32:24.193 Error Log Page Entries Supported: 1 00:32:24.193 Keep Alive: Not Supported 00:32:24.193 00:32:24.193 NVM Command Set Attributes 00:32:24.193 ========================== 00:32:24.193 Submission Queue Entry Size 00:32:24.193 Max: 1 00:32:24.193 Min: 1 00:32:24.193 Completion Queue Entry Size 00:32:24.193 Max: 1 00:32:24.193 Min: 1 00:32:24.193 Number of Namespaces: 0 00:32:24.193 Compare Command: Not Supported 00:32:24.193 Write Uncorrectable Command: Not Supported 00:32:24.193 Dataset Management Command: Not Supported 00:32:24.193 Write Zeroes Command: Not Supported 00:32:24.193 Set Features Save Field: Not Supported 00:32:24.193 Reservations: Not Supported 00:32:24.193 Timestamp: Not Supported 00:32:24.193 Copy: Not Supported 00:32:24.193 Volatile Write Cache: Not Present 00:32:24.193 Atomic Write Unit (Normal): 1 00:32:24.193 Atomic Write Unit (PFail): 1 00:32:24.193 Atomic Compare & Write Unit: 1 00:32:24.193 Fused Compare & Write: Not Supported 00:32:24.193 Scatter-Gather List 00:32:24.193 SGL Command Set: Supported 00:32:24.193 SGL Keyed: Not Supported 00:32:24.193 SGL Bit Bucket Descriptor: Not Supported 00:32:24.193 SGL Metadata Pointer: Not Supported 00:32:24.193 Oversized SGL: Not Supported 00:32:24.193 SGL Metadata Address: Not Supported 00:32:24.193 SGL Offset: Supported 00:32:24.193 Transport SGL Data Block: Not Supported 00:32:24.193 Replay Protected Memory Block: Not Supported 00:32:24.193 00:32:24.193 Firmware Slot Information 00:32:24.193 ========================= 00:32:24.193 Active slot: 0 00:32:24.193 00:32:24.193 00:32:24.193 Error Log 00:32:24.193 
========= 00:32:24.193 00:32:24.193 Active Namespaces 00:32:24.193 ================= 00:32:24.193 Discovery Log Page 00:32:24.193 ================== 00:32:24.193 Generation Counter: 2 00:32:24.193 Number of Records: 2 00:32:24.193 Record Format: 0 00:32:24.193 00:32:24.193 Discovery Log Entry 0 00:32:24.193 ---------------------- 00:32:24.193 Transport Type: 3 (TCP) 00:32:24.193 Address Family: 1 (IPv4) 00:32:24.193 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:24.193 Entry Flags: 00:32:24.193 Duplicate Returned Information: 0 00:32:24.193 Explicit Persistent Connection Support for Discovery: 0 00:32:24.193 Transport Requirements: 00:32:24.193 Secure Channel: Not Specified 00:32:24.193 Port ID: 1 (0x0001) 00:32:24.193 Controller ID: 65535 (0xffff) 00:32:24.193 Admin Max SQ Size: 32 00:32:24.193 Transport Service Identifier: 4420 00:32:24.193 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:24.193 Transport Address: 10.0.0.1 00:32:24.193 Discovery Log Entry 1 00:32:24.193 ---------------------- 00:32:24.193 Transport Type: 3 (TCP) 00:32:24.193 Address Family: 1 (IPv4) 00:32:24.193 Subsystem Type: 2 (NVM Subsystem) 00:32:24.193 Entry Flags: 00:32:24.193 Duplicate Returned Information: 0 00:32:24.193 Explicit Persistent Connection Support for Discovery: 0 00:32:24.193 Transport Requirements: 00:32:24.193 Secure Channel: Not Specified 00:32:24.193 Port ID: 1 (0x0001) 00:32:24.193 Controller ID: 65535 (0xffff) 00:32:24.193 Admin Max SQ Size: 32 00:32:24.193 Transport Service Identifier: 4420 00:32:24.193 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:24.193 Transport Address: 10.0.0.1 00:32:24.193 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:24.193 get_feature(0x01) failed 00:32:24.193 get_feature(0x02) failed 00:32:24.193 get_feature(0x04) failed 00:32:24.193 ===================================================== 00:32:24.193 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:24.193 ===================================================== 00:32:24.193 Controller Capabilities/Features 00:32:24.193 ================================ 00:32:24.193 Vendor ID: 0000 00:32:24.193 Subsystem Vendor ID: 0000 00:32:24.193 Serial Number: 8c278ec1e0bb8478034e 00:32:24.193 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:24.193 Firmware Version: 6.8.9-20 00:32:24.193 Recommended Arb Burst: 6 00:32:24.193 IEEE OUI Identifier: 00 00 00 00:32:24.193 Multi-path I/O 00:32:24.193 May have multiple subsystem ports: Yes 00:32:24.193 May have multiple controllers: Yes 00:32:24.193 Associated with SR-IOV VF: No 00:32:24.193 Max Data Transfer Size: Unlimited 00:32:24.193 Max Number of Namespaces: 1024 00:32:24.193 Max Number of I/O Queues: 128 00:32:24.193 NVMe Specification Version (VS): 1.3 00:32:24.193 NVMe Specification Version (Identify): 1.3 00:32:24.193 Maximum Queue Entries: 1024 00:32:24.193 Contiguous Queues Required: No 00:32:24.193 Arbitration Mechanisms Supported 00:32:24.193 Weighted Round Robin: Not Supported 00:32:24.193 Vendor Specific: Not Supported 00:32:24.193 Reset Timeout: 7500 ms 00:32:24.193 Doorbell Stride: 4 bytes 00:32:24.193 NVM Subsystem Reset: Not Supported 00:32:24.193 Command Sets Supported 00:32:24.193 NVM Command Set: Supported 00:32:24.193 Boot Partition: Not Supported 00:32:24.193 
Memory Page Size Minimum: 4096 bytes 00:32:24.193 Memory Page Size Maximum: 4096 bytes 00:32:24.193 Persistent Memory Region: Not Supported 00:32:24.193 Optional Asynchronous Events Supported 00:32:24.193 Namespace Attribute Notices: Supported 00:32:24.193 Firmware Activation Notices: Not Supported 00:32:24.193 ANA Change Notices: Supported 00:32:24.193 PLE Aggregate Log Change Notices: Not Supported 00:32:24.193 LBA Status Info Alert Notices: Not Supported 00:32:24.193 EGE Aggregate Log Change Notices: Not Supported 00:32:24.193 Normal NVM Subsystem Shutdown event: Not Supported 00:32:24.193 Zone Descriptor Change Notices: Not Supported 00:32:24.193 Discovery Log Change Notices: Not Supported 00:32:24.193 Controller Attributes 00:32:24.193 128-bit Host Identifier: Supported 00:32:24.193 Non-Operational Permissive Mode: Not Supported 00:32:24.193 NVM Sets: Not Supported 00:32:24.193 Read Recovery Levels: Not Supported 00:32:24.193 Endurance Groups: Not Supported 00:32:24.193 Predictable Latency Mode: Not Supported 00:32:24.193 Traffic Based Keep ALive: Supported 00:32:24.193 Namespace Granularity: Not Supported 00:32:24.193 SQ Associations: Not Supported 00:32:24.193 UUID List: Not Supported 00:32:24.193 Multi-Domain Subsystem: Not Supported 00:32:24.193 Fixed Capacity Management: Not Supported 00:32:24.193 Variable Capacity Management: Not Supported 00:32:24.193 Delete Endurance Group: Not Supported 00:32:24.193 Delete NVM Set: Not Supported 00:32:24.193 Extended LBA Formats Supported: Not Supported 00:32:24.193 Flexible Data Placement Supported: Not Supported 00:32:24.193 00:32:24.193 Controller Memory Buffer Support 00:32:24.193 ================================ 00:32:24.193 Supported: No 00:32:24.193 00:32:24.193 Persistent Memory Region Support 00:32:24.193 ================================ 00:32:24.193 Supported: No 00:32:24.193 00:32:24.193 Admin Command Set Attributes 00:32:24.193 ============================ 00:32:24.193 Security Send/Receive: Not Supported 00:32:24.193 Format NVM: Not Supported 00:32:24.193 Firmware Activate/Download: Not Supported 00:32:24.193 Namespace Management: Not Supported 00:32:24.193 Device Self-Test: Not Supported 00:32:24.193 Directives: Not Supported 00:32:24.193 NVMe-MI: Not Supported 00:32:24.193 Virtualization Management: Not Supported 00:32:24.193 Doorbell Buffer Config: Not Supported 00:32:24.193 Get LBA Status Capability: Not Supported 00:32:24.193 Command & Feature Lockdown Capability: Not Supported 00:32:24.193 Abort Command Limit: 4 00:32:24.193 Async Event Request Limit: 4 00:32:24.193 Number of Firmware Slots: N/A 00:32:24.193 Firmware Slot 1 Read-Only: N/A 00:32:24.193 Firmware Activation Without Reset: N/A 00:32:24.193 Multiple Update Detection Support: N/A 00:32:24.193 Firmware Update Granularity: No Information Provided 00:32:24.193 Per-Namespace SMART Log: Yes 00:32:24.193 Asymmetric Namespace Access Log Page: Supported 00:32:24.193 ANA Transition Time : 10 sec 00:32:24.193 00:32:24.193 Asymmetric Namespace Access Capabilities 00:32:24.193 ANA Optimized State : Supported 00:32:24.193 ANA Non-Optimized State : Supported 00:32:24.193 ANA Inaccessible State : Supported 00:32:24.193 ANA Persistent Loss State : Supported 00:32:24.193 ANA Change State : Supported 00:32:24.193 ANAGRPID is not changed : No 00:32:24.193 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:24.193 00:32:24.193 ANA Group Identifier Maximum : 128 00:32:24.193 Number of ANA Group Identifiers : 128 00:32:24.193 Max Number of Allowed Namespaces : 1024 00:32:24.194 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:24.194 Command Effects Log Page: Supported 00:32:24.194 Get Log Page Extended Data: Supported 00:32:24.194 Telemetry Log Pages: Not Supported 00:32:24.194 Persistent Event Log Pages: Not Supported 00:32:24.194 Supported Log Pages Log Page: May Support 00:32:24.194 Commands Supported & Effects Log Page: Not Supported 00:32:24.194 Feature Identifiers & Effects Log Page:May Support 00:32:24.194 NVMe-MI Commands & Effects Log Page: May Support 00:32:24.194 Data Area 4 for Telemetry Log: Not Supported 00:32:24.194 Error Log Page Entries Supported: 128 00:32:24.194 Keep Alive: Supported 00:32:24.194 Keep Alive Granularity: 1000 ms 00:32:24.194 00:32:24.194 NVM Command Set Attributes 00:32:24.194 ========================== 00:32:24.194 Submission Queue Entry Size 00:32:24.194 Max: 64 00:32:24.194 Min: 64 00:32:24.194 Completion Queue Entry Size 00:32:24.194 Max: 16 00:32:24.194 Min: 16 00:32:24.194 Number of Namespaces: 1024 00:32:24.194 Compare Command: Not Supported 00:32:24.194 Write Uncorrectable Command: Not Supported 00:32:24.194 Dataset Management Command: Supported 00:32:24.194 Write Zeroes Command: Supported 00:32:24.194 Set Features Save Field: Not Supported 00:32:24.194 Reservations: Not Supported 00:32:24.194 Timestamp: Not Supported 00:32:24.194 Copy: Not Supported 00:32:24.194 Volatile Write Cache: Present 00:32:24.194 Atomic Write Unit (Normal): 1 00:32:24.194 Atomic Write Unit (PFail): 1 00:32:24.194 Atomic Compare & Write Unit: 1 00:32:24.194 Fused Compare & Write: Not Supported 00:32:24.194 Scatter-Gather List 00:32:24.194 SGL Command Set: Supported 00:32:24.194 SGL Keyed: Not Supported 00:32:24.194 SGL Bit Bucket Descriptor: Not Supported 00:32:24.194 SGL Metadata Pointer: Not Supported 00:32:24.194 Oversized SGL: Not Supported 00:32:24.194 SGL Metadata Address: Not Supported 00:32:24.194 SGL Offset: Supported 00:32:24.194 Transport SGL Data Block: Not Supported 00:32:24.194 Replay Protected Memory Block: Not Supported 00:32:24.194 00:32:24.194 Firmware Slot Information 00:32:24.194 ========================= 00:32:24.194 Active slot: 0 00:32:24.194 00:32:24.194 Asymmetric Namespace Access 00:32:24.194 =========================== 00:32:24.194 Change Count : 0 00:32:24.194 Number of ANA Group Descriptors : 1 00:32:24.194 ANA Group Descriptor : 0 00:32:24.194 ANA Group ID : 1 00:32:24.194 Number of NSID Values : 1 00:32:24.194 Change Count : 0 00:32:24.194 ANA State : 1 00:32:24.194 Namespace Identifier : 1 00:32:24.194 00:32:24.194 Commands Supported and Effects 00:32:24.194 ============================== 00:32:24.194 Admin Commands 00:32:24.194 -------------- 00:32:24.194 Get Log Page (02h): Supported 00:32:24.194 Identify (06h): Supported 00:32:24.194 Abort (08h): Supported 00:32:24.194 Set Features (09h): Supported 00:32:24.194 Get Features (0Ah): Supported 00:32:24.194 Asynchronous Event Request (0Ch): Supported 00:32:24.194 Keep Alive (18h): Supported 00:32:24.194 I/O Commands 00:32:24.194 ------------ 00:32:24.194 Flush (00h): Supported 00:32:24.194 Write (01h): Supported LBA-Change 00:32:24.194 Read (02h): Supported 00:32:24.194 Write Zeroes (08h): Supported LBA-Change 00:32:24.194 Dataset Management (09h): Supported 00:32:24.194 00:32:24.194 Error Log 00:32:24.194 ========= 00:32:24.194 Entry: 0 00:32:24.194 Error Count: 0x3 00:32:24.194 Submission Queue Id: 0x0 00:32:24.194 Command Id: 0x5 00:32:24.194 Phase Bit: 0 00:32:24.194 Status Code: 0x2 00:32:24.194 Status Code Type: 0x0 00:32:24.194 Do Not Retry: 1 00:32:24.194 
Error Location: 0x28 00:32:24.194 LBA: 0x0 00:32:24.194 Namespace: 0x0 00:32:24.194 Vendor Log Page: 0x0 00:32:24.194 ----------- 00:32:24.194 Entry: 1 00:32:24.194 Error Count: 0x2 00:32:24.194 Submission Queue Id: 0x0 00:32:24.194 Command Id: 0x5 00:32:24.194 Phase Bit: 0 00:32:24.194 Status Code: 0x2 00:32:24.194 Status Code Type: 0x0 00:32:24.194 Do Not Retry: 1 00:32:24.194 Error Location: 0x28 00:32:24.194 LBA: 0x0 00:32:24.194 Namespace: 0x0 00:32:24.194 Vendor Log Page: 0x0 00:32:24.194 ----------- 00:32:24.194 Entry: 2 00:32:24.194 Error Count: 0x1 00:32:24.194 Submission Queue Id: 0x0 00:32:24.194 Command Id: 0x4 00:32:24.194 Phase Bit: 0 00:32:24.194 Status Code: 0x2 00:32:24.194 Status Code Type: 0x0 00:32:24.194 Do Not Retry: 1 00:32:24.194 Error Location: 0x28 00:32:24.194 LBA: 0x0 00:32:24.194 Namespace: 0x0 00:32:24.194 Vendor Log Page: 0x0 00:32:24.194 00:32:24.194 Number of Queues 00:32:24.194 ================ 00:32:24.194 Number of I/O Submission Queues: 128 00:32:24.194 Number of I/O Completion Queues: 128 00:32:24.194 00:32:24.194 ZNS Specific Controller Data 00:32:24.194 ============================ 00:32:24.194 Zone Append Size Limit: 0 00:32:24.194 00:32:24.194 00:32:24.194 Active Namespaces 00:32:24.194 ================= 00:32:24.194 get_feature(0x05) failed 00:32:24.194 Namespace ID:1 00:32:24.194 Command Set Identifier: NVM (00h) 00:32:24.194 Deallocate: Supported 00:32:24.194 Deallocated/Unwritten Error: Not Supported 00:32:24.194 Deallocated Read Value: Unknown 00:32:24.194 Deallocate in Write Zeroes: Not Supported 00:32:24.194 Deallocated Guard Field: 0xFFFF 00:32:24.194 Flush: Supported 00:32:24.194 Reservation: Not Supported 00:32:24.194 Namespace Sharing Capabilities: Multiple Controllers 00:32:24.194 Size (in LBAs): 3750748848 (1788GiB) 00:32:24.194 Capacity (in LBAs): 3750748848 (1788GiB) 00:32:24.194 Utilization (in LBAs): 3750748848 (1788GiB) 00:32:24.194 UUID: 5b2b8d31-9ca0-496b-8632-3261d5ec3f4f 00:32:24.194 Thin Provisioning: Not Supported 00:32:24.194 Per-NS Atomic Units: Yes 00:32:24.194 Atomic Write Unit (Normal): 8 00:32:24.194 Atomic Write Unit (PFail): 8 00:32:24.194 Preferred Write Granularity: 8 00:32:24.194 Atomic Compare & Write Unit: 8 00:32:24.194 Atomic Boundary Size (Normal): 0 00:32:24.194 Atomic Boundary Size (PFail): 0 00:32:24.194 Atomic Boundary Offset: 0 00:32:24.194 NGUID/EUI64 Never Reused: No 00:32:24.194 ANA group ID: 1 00:32:24.194 Namespace Write Protected: No 00:32:24.194 Number of LBA Formats: 1 00:32:24.194 Current LBA Format: LBA Format #00 00:32:24.194 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:24.194 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:24.194 rmmod nvme_tcp 00:32:24.194 rmmod nvme_fabrics 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:24.194 14:30:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:26.770 14:30:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:30.078 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:30.078 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:30.651 00:32:30.651 real 0m19.784s 00:32:30.651 user 0m5.385s 00:32:30.651 sys 0m11.404s 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:30.651 ************************************ 00:32:30.651 END TEST nvmf_identify_kernel_target 00:32:30.651 ************************************ 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.651 ************************************ 00:32:30.651 START TEST nvmf_auth_host 00:32:30.651 ************************************ 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:30.651 * Looking for test storage... 
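Before the END TEST marker above, clean_kernel_target dismantled the configfs-based kernel target in reverse creation order (the port-to-subsystem symlink has to go before the namespace and port directories can be removed), and setup.sh then rebound the ioatdma and nvme devices to vfio-pci for the next test. Condensed from the traced @714-@723 sequence; the destination of the traced 'echo 0' is not shown, and disabling the namespace is the usual step there, so that one line is an assumption:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"     # assumed target of the traced 'echo 0'
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  modprobe -r nvmet_tcp nvmet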
00:32:30.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.651 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.652 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.914 --rc genhtml_branch_coverage=1 00:32:30.914 --rc genhtml_function_coverage=1 00:32:30.914 --rc genhtml_legend=1 00:32:30.914 --rc geninfo_all_blocks=1 00:32:30.914 --rc geninfo_unexecuted_blocks=1 00:32:30.914 00:32:30.914 ' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.914 --rc genhtml_branch_coverage=1 00:32:30.914 --rc genhtml_function_coverage=1 00:32:30.914 --rc genhtml_legend=1 00:32:30.914 --rc geninfo_all_blocks=1 00:32:30.914 --rc geninfo_unexecuted_blocks=1 00:32:30.914 00:32:30.914 ' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.914 --rc genhtml_branch_coverage=1 00:32:30.914 --rc genhtml_function_coverage=1 00:32:30.914 --rc genhtml_legend=1 00:32:30.914 --rc geninfo_all_blocks=1 00:32:30.914 --rc geninfo_unexecuted_blocks=1 00:32:30.914 00:32:30.914 ' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:30.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.914 --rc genhtml_branch_coverage=1 00:32:30.914 --rc genhtml_function_coverage=1 00:32:30.914 --rc genhtml_legend=1 00:32:30.914 --rc geninfo_all_blocks=1 00:32:30.914 --rc geninfo_unexecuted_blocks=1 00:32:30.914 00:32:30.914 ' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.914 14:30:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:30.914 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.915 14:30:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:32:39.079 14:30:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:39.079 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:39.080 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:39.080 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:39.080 
14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:39.080 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:39.081 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:39.081 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:39.081 14:30:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:39.081 14:30:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:39.081 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:39.081 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:32:39.081 00:32:39.081 --- 10.0.0.2 ping statistics --- 00:32:39.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.081 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:39.081 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:39.081 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:32:39.081 00:32:39.081 --- 10.0.0.1 ping statistics --- 00:32:39.081 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:39.081 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3589007 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3589007 00:32:39.081 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:39.082 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3589007 ']' 00:32:39.082 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.082 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.082 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
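nvmftestinit above walked the PCI bus, matched both e810 ports (0000:4b:00.0 and 0000:4b:00.1, device 0x159b) to the net devices cvl_0_0 and cvl_0_1, and nvmf_tcp_init then split them across a network namespace so initiator and target traffic crosses a real link before both directions are ping-verified. Condensed from the trace, with interface and namespace names exactly as logged, followed by the target launch from the @508 line:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                               # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # target namespace -> root namespace
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth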
00:32:39.082 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.082 14:30:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.082 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.082 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:39.082 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:39.082 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:39.082 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b8bbb679697486932f17c4721ec4d7b4 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Yo4 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b8bbb679697486932f17c4721ec4d7b4 0 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b8bbb679697486932f17c4721ec4d7b4 0 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b8bbb679697486932f17c4721ec4d7b4 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Yo4 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Yo4 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Yo4 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.350 14:30:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=62e7d7fe372258be6734da7344cd7f9cb308bc23a401b1e41e217876cc025357 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.m9K 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 62e7d7fe372258be6734da7344cd7f9cb308bc23a401b1e41e217876cc025357 3 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 62e7d7fe372258be6734da7344cd7f9cb308bc23a401b1e41e217876cc025357 3 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=62e7d7fe372258be6734da7344cd7f9cb308bc23a401b1e41e217876cc025357 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.m9K 00:32:39.350 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.m9K 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.m9K 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c499bf3e995b531e43d35fa95e5030457766c67be5cab9c9 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uMK 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c499bf3e995b531e43d35fa95e5030457766c67be5cab9c9 0 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c499bf3e995b531e43d35fa95e5030457766c67be5cab9c9 0 
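Each gen_dhchap_key call traced here draws len/2 random bytes with xxd as a hex string, pipes it through an inline python snippet (format_dhchap_key -> format_key), and stores the result mode 0600 under /tmp. The python body itself is not captured by the trace; the sketch below assumes the standard DH-HMAC-CHAP secret representation from NVMe TP 8006 -- DHHC-1:<digest-id>:<base64 of the secret bytes plus their little-endian CRC-32>: with digest ids 00=none, 01=sha256, 02=sha384, 03=sha512 -- and assumes the hex string itself serves as the ASCII secret:

  # Hedged reconstruction of 'gen_dhchap_key null 32'; the encoding step is an assumption.
  key=$(xxd -p -c0 -l 16 /dev/urandom)     # 32 hex chars for a 32-character key
  file=$(mktemp -t spdk.key-null.XXX)
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key" > "$file"
  chmod 0600 "$file"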
00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c499bf3e995b531e43d35fa95e5030457766c67be5cab9c9 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uMK 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uMK 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.uMK 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d802f758b639b9f364772fd354a5bee0bba300f75056b02f 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.u4D 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d802f758b639b9f364772fd354a5bee0bba300f75056b02f 2 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d802f758b639b9f364772fd354a5bee0bba300f75056b02f 2 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d802f758b639b9f364772fd354a5bee0bba300f75056b02f 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:39.351 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.u4D 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.u4D 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.u4D 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.613 14:30:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f485da0091fd21f2ca225c476b161515 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Csq 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f485da0091fd21f2ca225c476b161515 1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f485da0091fd21f2ca225c476b161515 1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f485da0091fd21f2ca225c476b161515 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Csq 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Csq 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Csq 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ed96119875ac9b2ff64519464827381f 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.K3F 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ed96119875ac9b2ff64519464827381f 1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ed96119875ac9b2ff64519464827381f 1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ed96119875ac9b2ff64519464827381f 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.K3F 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.K3F 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.K3F 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7cfebf59b402cf9e777aeaba24e20730ebc2a197f1d9726f 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Uft 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7cfebf59b402cf9e777aeaba24e20730ebc2a197f1d9726f 2 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7cfebf59b402cf9e777aeaba24e20730ebc2a197f1d9726f 2 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7cfebf59b402cf9e777aeaba24e20730ebc2a197f1d9726f 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Uft 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Uft 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Uft 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:32:39.613 14:30:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=59f0df7a42f7075c175912f69d53eea1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.w67 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 59f0df7a42f7075c175912f69d53eea1 0 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 59f0df7a42f7075c175912f69d53eea1 0 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=59f0df7a42f7075c175912f69d53eea1 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:32:39.613 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.w67 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.w67 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.w67 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b3411c97f04242f87ff8b2f733d7ef2658d1c8abf1fac721868dfcae56c4fde0 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.pWU 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b3411c97f04242f87ff8b2f733d7ef2658d1c8abf1fac721868dfcae56c4fde0 3 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b3411c97f04242f87ff8b2f733d7ef2658d1c8abf1fac721868dfcae56c4fde0 3 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b3411c97f04242f87ff8b2f733d7ef2658d1c8abf1fac721868dfcae56c4fde0 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.pWU 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.pWU 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pWU 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3589007 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3589007 ']' 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:39.874 14:30:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Yo4 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.m9K ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.m9K 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.uMK 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.u4D ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.u4D 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Csq 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.K3F ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.K3F 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Uft 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.w67 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.w67 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pWU 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:40.136 14:30:45 
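What the trace above shows is gen_dhchap_key from nvmf/common.sh building each secret: xxd pulls len/2 random bytes out of /dev/urandom as a hex string, and an inline "python -" (whose body the xtrace does not print) wraps that string in the DHHC-1 secret format before the file is chmod'ed to 0600. Below is a minimal sketch of one round, assuming the standard DH-HMAC-CHAP secret layout of base64(secret bytes + CRC-32) tagged with a digest index; the python part is a reconstruction, not the verbatim helper.

# One gen_dhchap_key round (sha256, len=32), reconstructed from the trace.
key=$(xxd -p -c0 -l 16 /dev/urandom)            # 32 hex chars, as traced
file=$(mktemp -t spdk.key-sha256.XXX)
python3 - "$key" 1 > "$file" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()                   # the hex string itself is the secret
digest = int(sys.argv[2])                       # 0=null 1=sha256 2=sha384 3=sha512
crc = struct.pack("<I", zlib.crc32(secret))     # assumed little-endian CRC-32 suffix
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(secret + crc).decode()))
PY
chmod 0600 "$file"                              # matches the chmod 0600 in the trace

The waitforlisten/rpc_cmd sequence traced after key generation then registers every generated file with the SPDK application listening on /var/tmp/spdk.sock as key0..key4 and ckey0..ckey3 via keyring_file_add_key; keyid 4 deliberately gets no controller key.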
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:40.136 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:40.137 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:32:40.137 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:40.137 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:40.137 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:40.137 14:30:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:43.437 Waiting for block devices as requested 00:32:43.697 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:43.697 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:43.697 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:43.955 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:43.956 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:43.956 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:44.215 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:44.215 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:44.215 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:44.474 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:44.474 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:44.474 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:44.734 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:44.734 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:44.734 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:44.994 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:44.994 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:45.940 No valid GPT data, bailing 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:45.940 14:30:50 
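configure_kernel_target is standing up a Linux-kernel nvmet target over configfs: after setup.sh reset rebinds the devices (the vfio-pci -> ioatdma/nvme lines above) and spdk-gpt.py confirms /dev/nvme0n1 carries no partition table it might clobber, the three mkdirs create the subsystem, namespace, and port objects. The echo and ln -s calls that follow populate them; redirection targets are invisible in an xtrace, so the attribute paths in this sketch are assumed from the stock nvmet configfs layout, while the values are taken verbatim from the trace.

# Reconstruction of configure_kernel_target (attribute paths assumed).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

The nvme discover run afterwards confirms the port is live: two discovery log entries, the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0, both at 10.0.0.1:4420 over tcp. nvmet_auth_init then creates hosts/nqn.2024-02.io.spdk:host0, echoes 0 (presumably back into attr_allow_any_host, so the allowed-hosts list governs), and symlinks the host into the subsystem's allowed_hosts/.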
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:45.940 00:32:45.940 Discovery Log Number of Records 2, Generation counter 2 00:32:45.940 =====Discovery Log Entry 0====== 00:32:45.940 trtype: tcp 00:32:45.940 adrfam: ipv4 00:32:45.940 subtype: current discovery subsystem 00:32:45.940 treq: not specified, sq flow control disable supported 00:32:45.940 portid: 1 00:32:45.940 trsvcid: 4420 00:32:45.940 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:45.940 traddr: 10.0.0.1 00:32:45.940 eflags: none 00:32:45.940 sectype: none 00:32:45.940 =====Discovery Log Entry 1====== 00:32:45.940 trtype: tcp 00:32:45.940 adrfam: ipv4 00:32:45.940 subtype: nvme subsystem 00:32:45.940 treq: not specified, sq flow control disable supported 00:32:45.940 portid: 1 00:32:45.940 trsvcid: 4420 00:32:45.940 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:45.940 traddr: 10.0.0.1 00:32:45.940 eflags: none 00:32:45.940 sectype: none 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.940 14:30:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.202 nvme0n1 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
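From here on the test sweeps combinations one at a time, and connect_authenticate is the unit being repeated: pin the SPDK host to a single digest/dhgroup, attach with the keyid under test, check that a controller named nvme0 appears, detach. In rpc.py terms (every flag below is taken from the trace; rpc_cmd is the test's wrapper around it):

# One connect_authenticate round, e.g. sha256 / ffdhe2048 / keyid 0.
scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0    # ckey passed only when one exists
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0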
00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.202 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.463 nvme0n1 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.463 14:30:51 
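Each round is preceded by nvmet_auth_set_key, which teaches the kernel target the same secret before the host dials in. The echoed values ('hmac(sha256)', the dhgroup, the DHHC-1 strings) are visible in the trace, but their destinations are not; assuming the kernel nvmet per-host auth attributes, it amounts to:

# Sketch of nvmet_auth_set_key; the four attribute names are assumptions
# based on the nvmet configfs interface, the values come from the trace.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'        > "$host/dhchap_hash"
echo ffdhe2048             > "$host/dhchap_dhgroup"
echo "DHHC-1:00:YzQ5...:"  > "$host/dhchap_key"       # keys[keyid], truncated here
echo "DHHC-1:02:ZDgw...:"  > "$host/dhchap_ctrl_key"  # ckeys[keyid], skipped when empty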
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.463 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.725 nvme0n1 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.725 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.987 nvme0n1 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.987 14:30:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.987 nvme0n1 00:32:46.987 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.987 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.987 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.987 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.987 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.250 nvme0n1 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.250 14:30:52 
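keyid 4 is the unidirectional case: ckeys[4] was left empty during setup, so the [[ -z '' ]] guards above skip the controller key on both sides and the attach carries only --dhchap-key, meaning the target never has to prove itself back to the host:

# Unidirectional DH-HMAC-CHAP (host authenticates, target does not).
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key4                # no --dhchap-ctrlr-key for keyid 4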
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.250 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.512 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.513 nvme0n1 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.513 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:47.774 
14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.774 nvme0n1 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.774 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.036 14:30:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.036 14:30:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.036 nvme0n1 00:32:48.036 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.036 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.036 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.036 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.036 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.036 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.298 14:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.298 nvme0n1 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.298 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.558 14:30:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.558 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.558 nvme0n1 00:32:48.559 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.559 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.559 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.559 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.559 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.559 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.820 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.082 nvme0n1 00:32:49.082 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.082 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.082 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.082 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.082 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.082 14:30:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:49.082 14:30:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.082 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.343 nvme0n1 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:49.343 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.344 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.603 nvme0n1 00:32:49.603 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.603 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.603 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.603 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.603 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.603 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:49.864 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.865 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 nvme0n1 00:32:50.126 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.126 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.126 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.126 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.126 14:30:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.126 14:30:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.126 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 nvme0n1 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.387 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.388 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.960 nvme0n1 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 
00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.960 14:30:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.537 nvme0n1 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.537 14:30:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.537 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 nvme0n1 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.815 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.132 14:30:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.395 nvme0n1 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.395 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.968 nvme0n1 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.968 14:30:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:53.541 nvme0n1 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.541 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.802 14:30:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.373 nvme0n1 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:54.373 
14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.373 14:30:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 nvme0n1 00:32:54.946 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.946 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.946 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.946 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.946 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.946 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.207 
14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.207 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.779 nvme0n1 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:55.779 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.780 14:31:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 nvme0n1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 nvme0n1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.722 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.983 nvme0n1 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:56.983 14:31:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.983 14:31:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.983 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.244 nvme0n1 00:32:57.244 14:31:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.244 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.505 nvme0n1 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.505 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.767 nvme0n1 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.767 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.768 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.028 nvme0n1 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.028 
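
[editor's note] The nvmf/common.sh@769-783 runs above are the get_main_ns_ip helper resolving which address to dial. A plausible reconstruction from the trace (the real helper may differ in detail): it maps the transport to the name of the environment variable carrying the address and then dereferences that name with bash indirect expansion, which is why the trace shows ip=NVMF_INITIATOR_IP before 10.0.0.1 is echoed:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # TEST_TRANSPORT is tcp in this run, so ip becomes "NVMF_INITIATOR_IP" ...
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      # ... and ${!ip} dereferences that name to the address, 10.0.0.1 here.
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }
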
14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:58.028 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:58.029 14:31:02 
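
[editor's note] The bare echo lines at host/auth.sh@48-51 ('hmac(sha384)', the DH group, the key, and optionally the ckey) are presumably redirected into the Linux nvmet configfs attributes of the host entry; the redirection targets are not visible in the trace, so the paths below are an assumption based on the kernel nvmet layout, not something this log confirms:

  # Assumed configfs destinations for nvmet_auth_set_key's echoes:
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest, as echoed at @48
  echo ffdhe3072      > "$host/dhchap_dhgroup"   # DH group, as echoed at @49
  echo "DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==:" \
                      > "$host/dhchap_key"       # host key, as echoed at @50
  # A non-empty ckey goes to "$host/dhchap_ctrl_key" the same way (@51).
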
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.029 14:31:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.289 nvme0n1 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:58.289 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:58.290 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:58.290 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.290 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.290 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.550 nvme0n1 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.550 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.810 nvme0n1 00:32:58.810 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.810 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.810 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:58.811 
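
[editor's note] A note on the secrets themselves: the DHHC-1 strings follow the NVMe DH-HMAC-CHAP secret representation "DHHC-1:<hmac>:<base64>:", where (as I read the format) <hmac> 00/01/02/03 selects none/SHA-256/SHA-384/SHA-512 for the retained-key transform and the base64 payload is the raw secret followed by a 4-byte CRC-32 trailer. A quick way to inspect one of the keys from this log (needs coreutils base64 and xxd):

  secret='DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR:'
  payload=${secret#DHHC-1:*:}   # drop the "DHHC-1:<hmac>:" prefix
  payload=${payload%:}          # and the trailing colon
  # 36 decoded bytes = 32-byte secret (an ASCII hex string in these tests)
  # plus the 4-byte CRC-32 trailer.
  base64 -d <<< "$payload" | xxd
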
14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.811 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.071 nvme0n1 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.071 
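
[editor's note] Note the attach for keyid 4 above passes no --dhchap-ctrlr-key: that is the ${var:+word} expansion at host/auth.sh@58 doing its job, yielding the flag pair only when ckeys[keyid] is non-empty (and keyid 4's ckey is empty, per the "ckey=" line). A minimal standalone demo of the idiom:

  ckeys=([1]=dummy-secret [4]=)   # hypothetical values; index 4 is empty
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
  done
  # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 -> 0 extra args:
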
14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.071 14:31:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.071 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.331 nvme0n1 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.331 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.332 14:31:04 
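
[editor's note] The recurring "[[ nvme0 == \n\v\m\e\0 ]]" checks look stranger than they are: xtrace backslash-escapes the right-hand side of == when the script quoted it, to show the comparison is literal rather than a glob match. Reproducible in isolation:

  set -x
  name=nvme0
  [[ $name == "$name" ]]   # traced as: [[ nvme0 == \n\v\m\e\0 ]]
  set +x
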
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.332 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.593 nvme0n1 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.593 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.853 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.854 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 nvme0n1 00:33:00.115 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.115 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.115 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.115 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.115 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 14:31:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.115 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.376 nvme0n1 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.376 14:31:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.376 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.636 nvme0n1 00:33:00.636 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.636 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.636 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.636 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.636 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.636 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.636 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.637 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:00.897 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.897 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:00.897 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:00.897 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:00.897 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:00.897 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.897 14:31:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.157 nvme0n1 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.157 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.728 nvme0n1 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.728 14:31:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.728 14:31:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.728 14:31:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.300 nvme0n1 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.300 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.300 
14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.600 nvme0n1 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.600 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.862 14:31:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 nvme0n1 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.122 14:31:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.122 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.383 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.954 nvme0n1 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.954 14:31:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.525 nvme0n1 00:33:04.525 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.525 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.525 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.525 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.525 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.525 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.786 
14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.786 14:31:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.358 nvme0n1 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.358 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.359 14:31:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.303 nvme0n1 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.303 14:31:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:06.303 14:31:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.303 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.876 nvme0n1 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.876 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:07.137 nvme0n1 00:33:07.138 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.138 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.138 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.138 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.138 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.138 14:31:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.138 nvme0n1 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.138 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:07.399 
14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:07.399 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.400 nvme0n1 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.400 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.661 
14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.661 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.661 nvme0n1 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.662 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.923 nvme0n1 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.923 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:07.924 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.924 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.924 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.924 14:31:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.924 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.185 nvme0n1 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.185 
14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:08.185 14:31:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.185 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.447 nvme0n1 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:08.447 14:31:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.447 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.708 nvme0n1 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.708 14:31:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.708 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.970 nvme0n1 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.970 14:31:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.970 
14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.970 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
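Every iteration in this block follows the same pattern: nvmet_auth_set_key (host/auth.sh@42-51 in the trace) programs one DH-HMAC-CHAP key on the target side, and connect_authenticate (host/auth.sh@55-65) verifies that the host can attach with that key before detaching again. Below is a minimal sketch of that flow, reconstructed only from the xtrace markers in this log; the configfs destinations of the echo calls and the TEST_TRANSPORT variable name are assumptions not visible in the trace, and the real test/nvmf/host/auth.sh may differ in detail.

nvmet_auth_set_key() {                 # host/auth.sh@42-51 in the trace
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Program the kernel target's host entry; the trace shows only the echoes,
    # so the nvmet configfs attribute names are assumed:
    echo "hmac($digest)"               # -> dhchap_hash      (assumed path)
    echo "$dhgroup"                    # -> dhchap_dhgroup   (assumed path)
    echo "$key"                        # -> dhchap_key       (assumed path)
    [[ -z $ckey ]] || echo "$ckey"     # -> dhchap_ctrl_key, only when a ctrlr key exists
}

get_main_ns_ip() {                     # nvmf/common.sh@769-783 in the trace
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    # The trace only shows the transport expanding to "tcp"; TEST_TRANSPORT is
    # an assumed variable name. $ip holds a variable *name*, resolved below.
    local ip=${ip_candidates[$TEST_TRANSPORT]}
    echo "${!ip}"                      # resolves to 10.0.0.1 in this run
}

connect_authenticate() {               # host/auth.sh@55-65 in the trace
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Restrict the initiator to the digest/dhgroup under test:
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    # Authentication succeeded iff the controller actually appeared:
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean slate for the next key
}

for digest in "${digests[@]}"; do               # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do         # host/auth.sh@101
        for keyid in "${!keys[@]}"; do          # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
        done
    done
done

The bare nvme0n1 lines interleaved with the trace are the stdout of bdev_nvme_attach_controller naming the bdev created on each successful (authenticated) attach, and the recurring [[ 0 == 0 ]] lines appear to be the rpc_cmd wrapper in common/autotest_common.sh asserting a zero exit status.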
00:33:09.232 nvme0n1 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.232 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.233 14:31:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.233 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:09.494 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.495 nvme0n1 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.495 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.756 14:31:14 
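The DHHC-1 strings echoed through these records are NVMe-oF in-band authentication secrets in the TP 8006 representation: the middle field names the transform applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the base64 payload carries the key followed by a CRC-32 tail, which is why the :03: secrets above are visibly longer than the :00: and :01: ones. Each keyid pairs a host secret (keyN) with an optional controller secret (ckeyN) for bidirectional authentication. A hedged example of minting such a secret with nvme-cli; the flag spellings should be checked against the installed gen-dhchap-key:

    # Generate a DH-HMAC-CHAP secret (nvme-cli). --hmac selects the transform:
    # 0 = none (prefix 00), 1 = SHA-256 (01), 2 = SHA-384 (02), 3 = SHA-512 (03).
    nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn nqn.2024-02.io.spdk:host0
    # shape of the output: DHHC-1:01:<base64(key || crc32)>: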
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.756 14:31:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.756 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.017 nvme0n1 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:10.017 14:31:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.017 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.017 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.278 nvme0n1 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.278 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.279 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.540 nvme0n1 00:33:10.540 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.540 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.540 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.540 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.540 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.540 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.800 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.061 nvme0n1 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:11.061 14:31:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.061 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.062 14:31:16 
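The nvmf/common.sh@769-783 records that follow (and that precede every attach in this trace) are the get_main_ns_ip helper resolving which address to dial: it keeps a transport-to-variable map, picks the tcp entry, and indirectly expands NVMF_INITIATOR_IP to 10.0.0.1. Reconstructed from the xtrace with the same variable names; the TEST_TRANSPORT spelling is an assumption, since xtrace only shows its expanded value:

    # get_main_ns_ip, pieced together from the nvmf/common.sh@769-783 records.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1                    # @775: expands to tcp
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # @776
        [[ -z ${!ip} ]] && return 1                             # @778: 10.0.0.1
        echo "${!ip}"                                           # @783
    }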
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.062 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.632 nvme0n1 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:11.632 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:11.633 14:31:16 
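The host/auth.sh@58 line seen a few records back is what makes the controller key optional on the initiator side: ${ckeys[keyid]:+...} expands to the extra --dhchap-ctrlr-key arguments only when a controller secret exists for that keyid, which is why the attach above carries ckey1 while the keyid=4 attaches elsewhere in this trace carry no ctrlr key at all. The idiom in isolation, with placeholder values:

    # ${parameter:+word} expands to word only if parameter is set and non-empty.
    ckeys=("c0" "c1" "c2" "c3" "")           # keyid 4 deliberately has no ckey
    keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"                        # -> --dhchap-ctrlr-key ckey1
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"                       # -> 0, so the flag is simply dropped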
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.633 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.893 nvme0n1 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.893 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:12.154 14:31:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.154 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.415 nvme0n1 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:33:12.415 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.675 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.936 nvme0n1 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.936 14:31:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.936 14:31:18 
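The bare nvme0n1 tokens interleaved through these records are the attached controller's namespace surfacing in command output between xtrace lines, and the recurring [[ nvme0 == \n\v\m\e\0 ]] is host/auth.sh@64 confirming the attach succeeded: inside [[ ]], == performs glob matching, and xtrace prints the quoted right-hand side with every character escaped to keep it literal. The same check and teardown, unescaped:

    # host/auth.sh@64-65: prove the controller authenticated, then tear it down.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]                   # quoted RHS means literal comparison
    scripts/rpc.py bdev_nvme_detach_controller nvme0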
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:12.936 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:12.937 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:13.197 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:13.198 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.198 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.459 nvme0n1 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:13.459 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjhiYmI2Nzk2OTc0ODY5MzJmMTdjNDcyMWVjNGQ3YjQGQgI+: 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: ]] 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjJlN2Q3ZmUzNzIyNThiZTY3MzRkYTczNDRjZDdmOWNiMzA4YmMyM2E0MDFiMWU0MWUyMTc4NzZjYzAyNTM1Nzz4Ev8=: 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.460 14:31:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.402 nvme0n1 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.402 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.973 nvme0n1 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.973 14:31:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.973 14:31:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.973 14:31:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.543 nvme0n1 00:33:15.544 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.544 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:15.544 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:15.544 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.544 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.544 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2NmZWJmNTliNDAyY2Y5ZTc3N2FlYWJhMjRlMjA3MzBlYmMyYTE5N2YxZDk3MjZmy4vuTg==: 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTlmMGRmN2E0MmY3MDc1YzE3NTkxMmY2OWQ1M2VlYTFmjFXy: 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:15.805 14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.805 
14:31:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.377 nvme0n1 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YjM0MTFjOTdmMDQyNDJmODdmZjhiMmY3MzNkN2VmMjY1OGQxYzhhYmYxZmFjNzIxODY4ZGZjYWU1NmM0ZmRlMKZbRxg=: 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.377 14:31:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.318 nvme0n1 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.318 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.319 request: 00:33:17.319 { 00:33:17.319 "name": "nvme0", 00:33:17.319 "trtype": "tcp", 00:33:17.319 "traddr": "10.0.0.1", 00:33:17.319 "adrfam": "ipv4", 00:33:17.319 "trsvcid": "4420", 00:33:17.319 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:17.319 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:17.319 "prchk_reftag": false, 00:33:17.319 "prchk_guard": false, 00:33:17.319 "hdgst": false, 00:33:17.319 "ddgst": false, 00:33:17.319 "allow_unrecognized_csi": false, 00:33:17.319 "method": "bdev_nvme_attach_controller", 00:33:17.319 "req_id": 1 00:33:17.319 } 00:33:17.319 Got JSON-RPC error response 00:33:17.319 response: 00:33:17.319 { 00:33:17.319 "code": -5, 00:33:17.319 "message": "Input/output error" 00:33:17.319 } 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.319 request: 00:33:17.319 { 00:33:17.319 "name": "nvme0", 00:33:17.319 "trtype": "tcp", 00:33:17.319 "traddr": "10.0.0.1", 00:33:17.319 "adrfam": "ipv4", 00:33:17.319 "trsvcid": "4420", 00:33:17.319 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:17.319 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:17.319 "prchk_reftag": false, 00:33:17.319 "prchk_guard": false, 00:33:17.319 "hdgst": false, 00:33:17.319 "ddgst": false, 00:33:17.319 "dhchap_key": "key2", 00:33:17.319 "allow_unrecognized_csi": false, 00:33:17.319 "method": "bdev_nvme_attach_controller", 00:33:17.319 "req_id": 1 00:33:17.319 } 00:33:17.319 Got JSON-RPC error response 00:33:17.319 response: 00:33:17.319 { 00:33:17.319 "code": -5, 00:33:17.319 "message": "Input/output error" 00:33:17.319 } 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.319 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.320 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:17.320 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.320 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.581 request: 00:33:17.581 { 00:33:17.581 "name": "nvme0", 00:33:17.581 "trtype": "tcp", 00:33:17.581 "traddr": "10.0.0.1", 00:33:17.581 "adrfam": "ipv4", 00:33:17.581 "trsvcid": "4420", 00:33:17.581 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:17.581 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:17.581 "prchk_reftag": false, 00:33:17.581 "prchk_guard": false, 00:33:17.581 "hdgst": false, 00:33:17.581 "ddgst": false, 00:33:17.581 "dhchap_key": "key1", 00:33:17.581 "dhchap_ctrlr_key": "ckey2", 00:33:17.581 "allow_unrecognized_csi": false, 00:33:17.581 "method": "bdev_nvme_attach_controller", 00:33:17.581 "req_id": 1 00:33:17.581 } 00:33:17.581 Got JSON-RPC error response 00:33:17.581 response: 00:33:17.581 { 00:33:17.581 "code": -5, 00:33:17.581 "message": "Input/output 
error" 00:33:17.581 } 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.581 nvme0n1 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.581 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.842 request: 00:33:17.842 { 00:33:17.842 "name": "nvme0", 00:33:17.842 "dhchap_key": "key1", 00:33:17.842 "dhchap_ctrlr_key": "ckey2", 00:33:17.842 "method": "bdev_nvme_set_keys", 00:33:17.842 "req_id": 1 00:33:17.842 } 00:33:17.842 Got JSON-RPC error response 00:33:17.842 response: 00:33:17.842 { 00:33:17.842 "code": -13, 00:33:17.842 "message": "Permission denied" 00:33:17.842 } 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:33:17.842 14:31:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:33:18.783 14:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.783 14:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:18.784 14:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.784 14:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.784 14:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.044 14:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:33:19.044 14:31:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzQ5OWJmM2U5OTViNTMxZTQzZDM1ZmE5NWU1MDMwNDU3NzY2YzY3YmU1Y2FiOWM5Oyzh7Q==: 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: ]] 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZDgwMmY3NThiNjM5YjlmMzY0NzcyZmQzNTRhNWJlZTBiYmEzMDBmNzUwNTZiMDJmasmtxg==: 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.986 14:31:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 nvme0n1 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjQ4NWRhMDA5MWZkMjFmMmNhMjI1YzQ3NmIxNjE1MTWEhovR: 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: ]] 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWQ5NjExOTg3NWFjOWIyZmY2NDUxOTQ2NDgyNzM4MWZGyU4+: 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 request: 00:33:20.247 { 00:33:20.247 "name": "nvme0", 00:33:20.247 "dhchap_key": "key2", 00:33:20.247 "dhchap_ctrlr_key": "ckey1", 00:33:20.247 "method": "bdev_nvme_set_keys", 00:33:20.247 "req_id": 1 00:33:20.247 } 00:33:20.247 Got JSON-RPC error response 00:33:20.247 response: 00:33:20.247 { 00:33:20.247 "code": -13, 00:33:20.247 "message": "Permission denied" 00:33:20.247 } 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:33:20.247 14:31:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:33:21.190 14:31:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:21.190 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:21.190 rmmod nvme_tcp 00:33:21.451 rmmod nvme_fabrics 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3589007 ']' 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3589007 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3589007 ']' 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3589007 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3589007 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3589007' 00:33:21.451 killing process with pid 3589007 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3589007 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3589007 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:33:21.451 14:31:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:33:23.997 14:31:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:27.305 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:27.305 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:27.877 14:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Yo4 /tmp/spdk.key-null.uMK /tmp/spdk.key-sha256.Csq /tmp/spdk.key-sha384.Uft /tmp/spdk.key-sha512.pWU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:27.877 14:31:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:31.180 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:33:31.180 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:33:31.180 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:31.180 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:31.752 00:33:31.752 real 1m0.997s 00:33:31.752 user 0m54.757s 00:33:31.752 sys 0m16.156s 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.752 ************************************ 00:33:31.752 END TEST nvmf_auth_host 00:33:31.752 ************************************ 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:31.752 ************************************ 00:33:31.752 START TEST nvmf_digest 00:33:31.752 ************************************ 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:31.752 * Looking for test storage... 
00:33:31.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:31.752 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:31.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.753 --rc genhtml_branch_coverage=1 00:33:31.753 --rc genhtml_function_coverage=1 00:33:31.753 --rc genhtml_legend=1 00:33:31.753 --rc geninfo_all_blocks=1 00:33:31.753 --rc geninfo_unexecuted_blocks=1 00:33:31.753 00:33:31.753 ' 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:31.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.753 --rc genhtml_branch_coverage=1 00:33:31.753 --rc genhtml_function_coverage=1 00:33:31.753 --rc genhtml_legend=1 00:33:31.753 --rc geninfo_all_blocks=1 00:33:31.753 --rc geninfo_unexecuted_blocks=1 00:33:31.753 00:33:31.753 ' 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:31.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.753 --rc genhtml_branch_coverage=1 00:33:31.753 --rc genhtml_function_coverage=1 00:33:31.753 --rc genhtml_legend=1 00:33:31.753 --rc geninfo_all_blocks=1 00:33:31.753 --rc geninfo_unexecuted_blocks=1 00:33:31.753 00:33:31.753 ' 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:31.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:31.753 --rc genhtml_branch_coverage=1 00:33:31.753 --rc genhtml_function_coverage=1 00:33:31.753 --rc genhtml_legend=1 00:33:31.753 --rc geninfo_all_blocks=1 00:33:31.753 --rc geninfo_unexecuted_blocks=1 00:33:31.753 00:33:31.753 ' 00:33:31.753 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:32.016 
14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:32.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:32.016 14:31:36 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:33:32.016 14:31:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:40.163 
14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:40.163 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:40.163 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:40.163 Found net devices under 0000:4b:00.0: cvl_0_0 
00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:40.163 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:40.164 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:40.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:40.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.509 ms
00:33:40.164
00:33:40.164 --- 10.0.0.2 ping statistics ---
00:33:40.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:40.164 rtt min/avg/max/mdev = 0.509/0.509/0.509/0.000 ms
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:40.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:40.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms
00:33:40.164
00:33:40.164 --- 10.0.0.1 ping statistics ---
00:33:40.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:40.164 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:40.164 ************************************
00:33:40.164 START TEST nvmf_digest_clean
00:33:40.164 ************************************
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
host/digest.sh@120 -- # local dsa_initiator 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3606017 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3606017 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3606017 ']' 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:40.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.164 14:31:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.164 [2024-11-25 14:31:44.516157] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:33:40.164 [2024-11-25 14:31:44.516231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:40.164 [2024-11-25 14:31:44.616823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.164 [2024-11-25 14:31:44.668114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:40.164 [2024-11-25 14:31:44.668174] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:40.164 [2024-11-25 14:31:44.668183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:40.164 [2024-11-25 14:31:44.668190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:40.164 [2024-11-25 14:31:44.668197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
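The nvmf_tgt coming up here is launched through NVMF_TARGET_NS_CMD, i.e. every target command is prefixed with 'ip netns exec cvl_0_0_ns_spdk', so the target only sees the namespaced e810 port while bdevperf later connects from the root namespace. A minimal sketch of the wiring that nvmf_tcp_init traced above, using the interface names and addresses exactly as they appear in this log (the ipts bookkeeping and error handling are omitted):

    # the two physical E810 ports found earlier map to cvl_0_0 / cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # reachability check (done both ways above)

With NET_TYPE=phy this traffic crosses the real NIC pair rather than a virtual link, which is what distinguishes this job from the virt variants.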
00:33:40.164 [2024-11-25 14:31:44.668955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.426 null0 00:33:40.426 [2024-11-25 14:31:45.467217] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:40.426 [2024-11-25 14:31:45.491516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3606205 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3606205 /var/tmp/bperf.sock 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3606205 ']' 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:40.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:40.426 14:31:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:40.688 [2024-11-25 14:31:45.551714] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:33:40.688 [2024-11-25 14:31:45.551775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3606205 ] 00:33:40.688 [2024-11-25 14:31:45.642519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:40.688 [2024-11-25 14:31:45.695098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.279 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:41.279 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:41.279 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:41.279 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:41.279 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:41.540 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.540 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:41.917 nvme0n1 00:33:41.917 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:41.917 14:31:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:41.917 Running I/O for 2 seconds... 
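Each run_bperf invocation brings up its own bdevperf instance against that target, always with the same four-step RPC sequence visible in the trace above: start bdevperf paused, initialize its framework, attach an NVMe-oF TCP controller with data digest enabled (--ddgst), then drive the workload from bdevperf.py. Condensed from the trace (paths as in this workspace; bdevperf is backgrounded by run_bperf):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests

The 2-second randread results for this first pass (4 KiB blocks, queue depth 128) follow.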
00:33:44.301 18904.00 IOPS, 73.84 MiB/s
[2024-11-25T13:31:49.391Z] 20060.50 IOPS, 78.36 MiB/s
00:33:44.301 Latency(us)
00:33:44.301 [2024-11-25T13:31:49.391Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:44.301 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:44.301 nvme0n1 : 2.00 20090.18 78.48 0.00 0.00 6364.19 2826.24 22937.60
00:33:44.301 [2024-11-25T13:31:49.391Z] ===================================================================================================================
00:33:44.301 [2024-11-25T13:31:49.391Z] Total : 20090.18 78.48 0.00 0.00 6364.19 2826.24 22937.60
00:33:44.301 {
00:33:44.301 "results": [
00:33:44.301 {
00:33:44.301 "job": "nvme0n1",
00:33:44.301 "core_mask": "0x2",
00:33:44.301 "workload": "randread",
00:33:44.301 "status": "finished",
00:33:44.301 "queue_depth": 128,
00:33:44.301 "io_size": 4096,
00:33:44.301 "runtime": 2.003417,
00:33:44.301 "iops": 20090.17593441605,
00:33:44.302 "mibps": 78.47724974381269,
00:33:44.302 "io_failed": 0,
00:33:44.302 "io_timeout": 0,
00:33:44.302 "avg_latency_us": 6364.193370932611,
00:33:44.302 "min_latency_us": 2826.24,
00:33:44.302 "max_latency_us": 22937.6
00:33:44.302 }
00:33:44.302 ],
00:33:44.302 "core_count": 1
00:33:44.302 }
00:33:44.302 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:44.302 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:44.302 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:44.302 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:44.302 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:44.302 | select(.opcode=="crc32c")
00:33:44.302 | "\(.module_name) \(.executed)"'
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3606205
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3606205 ']'
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3606205
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3606205
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3606205' 00:33:44.303 killing process with pid 3606205 00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3606205 00:33:44.303 Received shutdown signal, test time was about 2.000000 seconds 00:33:44.303 00:33:44.303 Latency(us) 00:33:44.303 [2024-11-25T13:31:49.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.303 [2024-11-25T13:31:49.393Z] =================================================================================================================== 00:33:44.303 [2024-11-25T13:31:49.393Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3606205 00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:44.303 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3606894 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3606894 /var/tmp/bperf.sock 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3606894 ']' 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:44.304 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:44.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:44.306 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:44.306 14:31:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:44.568 [2024-11-25 14:31:49.400761] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:33:44.568 [2024-11-25 14:31:49.400815] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3606894 ] 00:33:44.568 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:44.568 Zero copy mechanism will not be used. 00:33:44.568 [2024-11-25 14:31:49.484304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.568 [2024-11-25 14:31:49.513147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:45.140 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:45.140 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:45.140 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:45.140 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:45.140 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:45.401 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.401 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:45.661 nvme0n1 00:33:45.921 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:45.921 14:31:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:45.921 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:45.921 Zero copy mechanism will not be used. 00:33:45.921 Running I/O for 2 seconds... 
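Two things stand out in this second pass. First, because the I/O size is now 131072 bytes, bdevperf notes that it exceeds the 65536-byte zero copy threshold and falls back to the regular buffered socket path despite the -z flag. Second, the MiB/s column in the table below is simply IOPS scaled by the block size, which makes the numbers easy to sanity-check (a quick arithmetic check, not part of the test itself):

    # MiB/s = IOPS * io_size / 2^20
    echo '4072.14 * 131072 / 1048576' | bc -l    # ~509.02, matching the Total row below
    echo '20090.18 * 4096 / 1048576' | bc -l     # ~78.48, matching the 4 KiB table above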
00:33:47.804 3962.00 IOPS, 495.25 MiB/s
[2024-11-25T13:31:52.894Z] 4069.50 IOPS, 508.69 MiB/s
00:33:47.804 Latency(us)
00:33:47.804 [2024-11-25T13:31:52.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:47.804 nvme0n1 : 2.00 4072.14 509.02 0.00 0.00 3926.42 532.48 9939.63
00:33:47.804 [2024-11-25T13:31:52.894Z] ===================================================================================================================
00:33:47.804 [2024-11-25T13:31:52.894Z] Total : 4072.14 509.02 0.00 0.00 3926.42 532.48 9939.63
00:33:47.804 {
00:33:47.804 "results": [
00:33:47.804 {
00:33:47.804 "job": "nvme0n1",
00:33:47.804 "core_mask": "0x2",
00:33:47.804 "workload": "randread",
00:33:47.804 "status": "finished",
00:33:47.804 "queue_depth": 16,
00:33:47.804 "io_size": 131072,
00:33:47.804 "runtime": 2.002631,
00:33:47.804 "iops": 4072.143095757531,
00:33:47.804 "mibps": 509.01788696969135,
00:33:47.804 "io_failed": 0,
00:33:47.804 "io_timeout": 0,
00:33:47.804 "avg_latency_us": 3926.4205354588184,
00:33:47.804 "min_latency_us": 532.48,
00:33:47.804 "max_latency_us": 9939.626666666667
00:33:47.804 }
00:33:47.804 ],
00:33:47.804 "core_count": 1
00:33:47.804 }
00:33:47.804 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:47.804 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:47.804 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:47.804 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:47.804 | select(.opcode=="crc32c")
00:33:47.804 | "\(.module_name) \(.executed)"'
00:33:47.804 14:31:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3606894
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3606894 ']'
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3606894
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3606894
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3606894'
killing process with pid 3606894
14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3606894
Received shutdown signal, test time was about 2.000000 seconds
00:33:48.065
00:33:48.065 Latency(us)
00:33:48.065 [2024-11-25T13:31:53.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:48.065 [2024-11-25T13:31:53.155Z] ===================================================================================================================
00:33:48.065 [2024-11-25T13:31:53.155Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:48.065 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3606894
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3607676
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3607676 /var/tmp/bperf.sock
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3607676 ']'
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:48.326 14:31:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:52.194 [2024-11-25 14:31:53.298754] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:33:48.326 [2024-11-25 14:31:53.298811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3607676 ] 00:33:48.326 [2024-11-25 14:31:53.379644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:48.326 [2024-11-25 14:31:53.409155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.266 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.266 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:49.266 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:49.266 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:49.266 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:49.266 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.266 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:49.527 nvme0n1 00:33:49.527 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:49.527 14:31:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:49.787 Running I/O for 2 seconds... 
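As with the two randread passes, the trace that follows ends with more than a throughput table: after each run, digest.sh pulls the accel-layer statistics out of the bdevperf instance and asserts that crc32c digest operations were actually executed, and by the expected module (software here, since every run in this job is started with scan_dsa=false). A condensed sketch of that check, with the jq filter exactly as it appears in the trace (the actual script goes through the bperf_rpc/get_accel_stats helpers):

    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
      | { read -r acc_module acc_executed
          (( acc_executed > 0 )) && [[ $acc_module == software ]]; }   # both must hold

A run whose I/O succeeded but never computed a digest would still fail here, which is the point of the test.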
00:33:51.669 30177.00 IOPS, 117.88 MiB/s [2024-11-25T13:31:56.759Z] 30384.50 IOPS, 118.69 MiB/s 00:33:51.669 Latency(us) 00:33:51.669 [2024-11-25T13:31:56.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.669 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:51.669 nvme0n1 : 2.01 30371.84 118.64 0.00 0.00 4208.54 1679.36 15837.87 00:33:51.669 [2024-11-25T13:31:56.759Z] =================================================================================================================== 00:33:51.669 [2024-11-25T13:31:56.759Z] Total : 30371.84 118.64 0.00 0.00 4208.54 1679.36 15837.87 00:33:51.669 { 00:33:51.669 "results": [ 00:33:51.669 { 00:33:51.669 "job": "nvme0n1", 00:33:51.669 "core_mask": "0x2", 00:33:51.669 "workload": "randwrite", 00:33:51.669 "status": "finished", 00:33:51.669 "queue_depth": 128, 00:33:51.669 "io_size": 4096, 00:33:51.669 "runtime": 2.005048, 00:33:51.669 "iops": 30371.841472124357, 00:33:51.669 "mibps": 118.64000575048577, 00:33:51.669 "io_failed": 0, 00:33:51.669 "io_timeout": 0, 00:33:51.669 "avg_latency_us": 4208.540845909212, 00:33:51.669 "min_latency_us": 1679.36, 00:33:51.669 "max_latency_us": 15837.866666666667 00:33:51.669 } 00:33:51.669 ], 00:33:51.669 "core_count": 1 00:33:51.669 } 00:33:51.669 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:51.669 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:51.669 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:51.669 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:51.669 | select(.opcode=="crc32c") 00:33:51.669 | "\(.module_name) \(.executed)"' 00:33:51.669 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3607676 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3607676 ']' 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3607676 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3607676 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3607676' 00:33:51.932 killing process with pid 3607676 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3607676 00:33:51.932 Received shutdown signal, test time was about 2.000000 seconds 00:33:51.932 00:33:51.932 Latency(us) 00:33:51.932 [2024-11-25T13:31:57.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:51.932 [2024-11-25T13:31:57.022Z] =================================================================================================================== 00:33:51.932 [2024-11-25T13:31:57.022Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:51.932 14:31:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3607676 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3608469 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3608469 /var/tmp/bperf.sock 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3608469 ']' 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:51.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:51.932 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:52.194 [2024-11-25 14:31:57.064863] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:33:52.194 [2024-11-25 14:31:57.064922] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3608469 ] 00:33:52.194 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:52.194 Zero copy mechanism will not be used. 00:33:52.194 [2024-11-25 14:31:57.149422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:52.194 [2024-11-25 14:31:57.179140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.765 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:52.765 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:52.765 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:53.026 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:53.026 14:31:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:53.026 14:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:53.026 14:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:53.598 nvme0n1 00:33:53.598 14:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:53.598 14:31:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:53.598 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:53.599 Zero copy mechanism will not be used. 00:33:53.599 Running I/O for 2 seconds... 
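Sanity check on the result tables: the MiB/s column is simply IOPS × I/O size. For the 4 KiB run earlier, 30371.84 IOPS × 4096 B ÷ 2^20 ≈ 118.64 MiB/s, matching the 'mibps' field in the JSON block; the 128 KiB results that follow immediately below obey the same identity.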
00:33:55.488 4077.00 IOPS, 509.62 MiB/s [2024-11-25T13:32:00.578Z] 5852.50 IOPS, 731.56 MiB/s 00:33:55.488 Latency(us) 00:33:55.488 [2024-11-25T13:32:00.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.488 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:55.488 nvme0n1 : 2.00 5848.50 731.06 0.00 0.00 2730.79 1112.75 11796.48 00:33:55.488 [2024-11-25T13:32:00.578Z] =================================================================================================================== 00:33:55.488 [2024-11-25T13:32:00.578Z] Total : 5848.50 731.06 0.00 0.00 2730.79 1112.75 11796.48 00:33:55.488 { 00:33:55.488 "results": [ 00:33:55.488 { 00:33:55.488 "job": "nvme0n1", 00:33:55.488 "core_mask": "0x2", 00:33:55.488 "workload": "randwrite", 00:33:55.488 "status": "finished", 00:33:55.488 "queue_depth": 16, 00:33:55.488 "io_size": 131072, 00:33:55.488 "runtime": 2.004789, 00:33:55.488 "iops": 5848.4957768623035, 00:33:55.488 "mibps": 731.0619721077879, 00:33:55.488 "io_failed": 0, 00:33:55.488 "io_timeout": 0, 00:33:55.488 "avg_latency_us": 2730.79009978678, 00:33:55.488 "min_latency_us": 1112.7466666666667, 00:33:55.488 "max_latency_us": 11796.48 00:33:55.488 } 00:33:55.488 ], 00:33:55.488 "core_count": 1 00:33:55.488 } 00:33:55.488 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:55.488 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:55.488 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:55.488 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:55.488 | select(.opcode=="crc32c") 00:33:55.488 | "\(.module_name) \(.executed)"' 00:33:55.488 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3608469 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3608469 ']' 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3608469 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3608469 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3608469' 00:33:55.751 killing process with pid 3608469 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3608469 00:33:55.751 Received shutdown signal, test time was about 2.000000 seconds 00:33:55.751 00:33:55.751 Latency(us) 00:33:55.751 [2024-11-25T13:32:00.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:55.751 [2024-11-25T13:32:00.841Z] =================================================================================================================== 00:33:55.751 [2024-11-25T13:32:00.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:55.751 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3608469 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3606017 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3606017 ']' 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3606017 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3606017 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3606017' 00:33:56.013 killing process with pid 3606017 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3606017 00:33:56.013 14:32:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3606017 00:33:56.013 00:33:56.013 real 0m16.593s 00:33:56.013 user 0m32.764s 00:33:56.013 sys 0m3.784s 00:33:56.013 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.013 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:56.013 ************************************ 00:33:56.013 END TEST nvmf_digest_clean 00:33:56.013 ************************************ 00:33:56.013 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:56.013 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:56.013 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:56.013 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:56.273 ************************************ 00:33:56.273 START TEST nvmf_digest_error 00:33:56.273 ************************************ 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3609290 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3609290 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3609290 ']' 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:56.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:56.273 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:56.274 [2024-11-25 14:32:01.198284] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:33:56.274 [2024-11-25 14:32:01.198337] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:56.274 [2024-11-25 14:32:01.291171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:56.274 [2024-11-25 14:32:01.321711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:56.274 [2024-11-25 14:32:01.321739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:56.274 [2024-11-25 14:32:01.321744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:56.274 [2024-11-25 14:32:01.321749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:56.274 [2024-11-25 14:32:01.321753] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:56.274 [2024-11-25 14:32:01.322254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.218 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:57.218 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:33:57.218 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:57.218 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:57.218 14:32:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.218 [2024-11-25 14:32:02.020167] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.218 null0 00:33:57.218 [2024-11-25 14:32:02.098082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:57.218 [2024-11-25 14:32:02.122293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3609439 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3609439 /var/tmp/bperf.sock 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3609439 ']' 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
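The '*** TCP Transport Init ***' and listen notices above come from common_target_config; the exact RPC invocations are elided from this excerpt, but the standard SPDK nvmf flow behind them looks roughly like this (a reconstruction, not a verbatim replay of this run; subsystem and bdev arguments assumed):

    rpc.py accel_assign_opc -o crc32c -m error      # verbatim above: route crc32c through the error module
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp
    rpc.py bdev_null_create null0 ...               # the 'null0' bdev printed above
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 ...
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420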
00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:57.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:57.218 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.218 [2024-11-25 14:32:02.176925] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:33:57.218 [2024-11-25 14:32:02.176973] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609439 ] 00:33:57.218 [2024-11-25 14:32:02.261306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.218 [2024-11-25 14:32:02.291080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:58.160 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:58.160 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:33:58.160 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:58.160 14:32:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:58.160 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:58.160 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.160 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:58.160 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.160 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:58.161 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:58.421 nvme0n1 00:33:58.421 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:58.421 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.421 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
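Note the order of the two accel_error_inject_error calls around the attach: injection is first disabled so the --ddgst connect itself completes with valid digests, and corruption is only armed afterwards. Roughly, distinguishing the target-side rpc_cmd from the bperf socket as in the lines above:

    rpc.py accel_error_inject_error -o crc32c -t disable          # target side: clean connect
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # target side: arm corruption

Combined with bdev_nvme_set_options --bdev-retry-count -1 (issued earlier over the bperf socket), each corrupted digest should surface as a retryable transport error rather than a failed I/O, which is what the flood of messages below shows.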
00:33:58.682 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.682 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:58.682 14:32:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:58.682 Running I/O for 2 seconds... 00:33:58.682 [2024-11-25 14:32:03.616411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.616442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.616451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.626820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.626839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.626846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.637051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.637069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.637076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.647269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.647288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.647294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.656068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.656085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.656092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.664517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.664534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.664541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.673387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.673404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.673410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.683614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.683631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.683638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.692256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.692274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.692284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.700619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.700635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.700641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.710891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.710908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.710915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.719494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.719511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.719518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.728674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.728691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.728698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.737105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.737123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.737129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.746063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.746080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.746087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.755318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.755336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.755342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.682 [2024-11-25 14:32:03.763447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.682 [2024-11-25 14:32:03.763464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.682 [2024-11-25 14:32:03.763470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.772541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.772564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.772570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.781661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.781678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.781684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.791132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.791149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.791155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.798964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.798980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.798987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.807616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.807633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.807640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.817235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.817252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.817258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.826249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.826266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.826272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.834373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.834390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.834397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.843658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.843675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.843682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.852443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.852461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 
[2024-11-25 14:32:03.852467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.861322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.861339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.861346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.870102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.870119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.870126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.878876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.878893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.878899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.888029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.888047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.888053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.897339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.897356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.897362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.945 [2024-11-25 14:32:03.906549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.945 [2024-11-25 14:32:03.906567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.945 [2024-11-25 14:32:03.906573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.914899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.914917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1666 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.914923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.923974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.923991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.924001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.933220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.933238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.933244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.942378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.942394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.942400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.951555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.951572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.951578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.959575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.959592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.959598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.967969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.967986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.967992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.977010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.977027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:4393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.977034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.986496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.986513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.986519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:03.996200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:03.996218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:03.996224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:04.003969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:04.003989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:04.003995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:04.014566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:04.014583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:04.014590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:04.023176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:04.023193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:04.023199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.946 [2024-11-25 14:32:04.032493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:58.946 [2024-11-25 14:32:04.032510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.946 [2024-11-25 14:32:04.032516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:59.208 [2024-11-25 14:32:04.041152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:33:59.208 [2024-11-25 14:32:04.041174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.208 [2024-11-25 14:32:04.041181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:59.208 [2024-11-25 14:32:04.049499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0)
00:33:59.208 [2024-11-25 14:32:04.049516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.208 [2024-11-25 14:32:04.049523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[repeated identical triples elided, timestamps 14:32:04.058 through 14:32:04.595: nvme_tcp.c:1365 "data digest error on tqpair=(0x7205f0)", READ on qid:1 with varying cid/lba, completion COMMAND TRANSIENT TRANSPORT ERROR (00/22)]
00:33:59.731 28076.00 IOPS, 109.67 MiB/s [2024-11-25T13:32:04.821Z]
[repeated identical triples elided, timestamps 14:32:04.604 through 14:32:05.327: same data digest error / READ / TRANSIENT TRANSPORT ERROR pattern on tqpair=(0x7205f0)]
00:34:00.255 [2024-11-25 14:32:05.336101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0)
00:34:00.255 [2024-11-25 14:32:05.336118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.255 [2024-11-25 14:32:05.336124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.514 [2024-11-25 14:32:05.347417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0)
00:34:00.514 [2024-11-25 14:32:05.347434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.514 [2024-11-25 14:32:05.347441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.514 [2024-11-25 14:32:05.358228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.514 [2024-11-25 14:32:05.358245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.514 [2024-11-25 14:32:05.358251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.514 [2024-11-25 14:32:05.367598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.514 [2024-11-25 14:32:05.367615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.514 [2024-11-25 14:32:05.367621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.514 [2024-11-25 14:32:05.375517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.514 [2024-11-25 14:32:05.375534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.514 [2024-11-25 14:32:05.375541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.514 [2024-11-25 14:32:05.385503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.514 [2024-11-25 14:32:05.385521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.514 [2024-11-25 14:32:05.385527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.514 [2024-11-25 14:32:05.395055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.514 [2024-11-25 14:32:05.395073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.514 [2024-11-25 14:32:05.395079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.514 [2024-11-25 14:32:05.402076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.514 [2024-11-25 14:32:05.402094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.514 [2024-11-25 14:32:05.402103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.514 [2024-11-25 14:32:05.412247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.412264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.412271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.421035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.421051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.421058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.430435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.430452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.430458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.438578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.438595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.438601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.446851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.446867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.446874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.457045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.457062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.457068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.465489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.465506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.465512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.475424] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.475441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.475447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.484534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.484551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.484557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.493048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.493065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.493071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.501708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.501724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.501731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.511048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.511064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:16879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.511071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.520968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.520985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.520991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.528566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.528582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.528588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
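Every failure in this stream follows the same three-line pattern: nvme_tcp.c flags a data digest (CRC32C) mismatch on the receive path, nvme_qpair.c prints the READ command that was in flight, then prints its completion. Taking the cid:56 completion just above and annotating its fields (the annotations are added here; they are not part of the log):

    COMMAND TRANSIENT TRANSPORT ERROR (00/22)   # (sct/sc): status code type 0x0 (generic), status code 0x22 (Transient Transport Error)
    qid:1 cid:56                                # I/O queue pair 1, command identifier 56
    cdw0:0 sqhd:0001                            # completion dword 0; submission queue head pointer
    p:0 m:0 dnr:0                               # phase tag, more bit, do-not-retry bit; dnr:0 means the host may retry

Because the errors are transient and retriable, the bdev layer resubmits the failed reads and the run still completes cleanly, as the "io_failed": 0 in the summary below confirms.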
00:34:00.515 [2024-11-25 14:32:05.539456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.539473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.539480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.548811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.548828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.548834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.558503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.558521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.558530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.567267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.567284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.567292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.575739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.575755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.575762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.585409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.585427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.585433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:00.515 [2024-11-25 14:32:05.593723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0) 00:34:00.515 [2024-11-25 14:32:05.593741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.515 [2024-11-25 14:32:05.593747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.774 28123.00 IOPS, 109.86 MiB/s [2024-11-25T13:32:05.864Z]
[2024-11-25 14:32:05.603146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7205f0)
00:34:00.774 [2024-11-25 14:32:05.603167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:00.774 [2024-11-25 14:32:05.603173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:34:00.774
00:34:00.774 Latency(us)
00:34:00.774 [2024-11-25T13:32:05.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:00.774 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:34:00.774 nvme0n1 : 2.01 28124.02 109.86 0.00 0.00 4544.35 2266.45 19114.67
00:34:00.774 [2024-11-25T13:32:05.864Z] ===================================================================================================================
00:34:00.774 [2024-11-25T13:32:05.864Z] Total : 28124.02 109.86 0.00 0.00 4544.35 2266.45 19114.67
00:34:00.774 {
00:34:00.774   "results": [
00:34:00.774     {
00:34:00.774       "job": "nvme0n1",
00:34:00.774       "core_mask": "0x2",
00:34:00.774       "workload": "randread",
00:34:00.774       "status": "finished",
00:34:00.774       "queue_depth": 128,
00:34:00.774       "io_size": 4096,
00:34:00.774       "runtime": 2.005474,
00:34:00.774       "iops": 28124.024544820826,
00:34:00.774       "mibps": 109.85947087820635,
00:34:00.774       "io_failed": 0,
00:34:00.774       "io_timeout": 0,
00:34:00.774       "avg_latency_us": 4544.353273524579,
00:34:00.774       "min_latency_us": 2266.4533333333334,
00:34:00.774       "max_latency_us": 19114.666666666668
00:34:00.774     }
00:34:00.774   ],
00:34:00.774   "core_count": 1
00:34:00.774 }
00:34:00.774 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:00.774 | .driver_specific
00:34:00.774 | .nvme_error
00:34:00.774 | .status_code
00:34:00.774 | .command_transient_transport_error'
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 ))
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3609439
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3609439 ']'
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3609439
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609439
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609439'
00:34:01.032 killing process with pid 3609439
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3609439
00:34:01.032 Received shutdown signal, test time was about 2.000000 seconds
00:34:01.032
00:34:01.032 Latency(us)
00:34:01.032 [2024-11-25T13:32:06.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:01.032 [2024-11-25T13:32:06.122Z] ===================================================================================================================
00:34:01.032 [2024-11-25T13:32:06.122Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3609439
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3610245
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3610245 /var/tmp/bperf.sock
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3610245 ']'
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:34:01.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:01.032 14:32:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:01.032 [2024-11-25 14:32:06.029863] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:34:01.032 [2024-11-25 14:32:06.029927] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3610245 ]
00:34:01.032 I/O size of 131072 is greater than zero copy threshold (65536).
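The pass/fail check for the run that just ended (get_transient_errcount, traced above before the killprocess) is worth spelling out: it queries the bdevperf app over its private RPC socket for per-bdev NVMe error statistics and extracts the transient-transport-error counter, which exists because bdev_nvme_set_options is invoked with --nvme-error-stat, as the next run's trace shows. A minimal standalone sketch of the same check, using only commands that appear in this trace:

    # Count completions with status (00/22) seen by nvme0n1 since attach;
    # this run recorded 221, and any value > 0 passes the test.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 ))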
00:34:01.032 Zero copy mechanism will not be used.
00:34:01.291 [2024-11-25 14:32:06.113246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:01.291 [2024-11-25 14:32:06.143379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:01.860 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:01.860 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:34:01.860 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:01.860 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:02.120 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:02.120 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.120 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:02.120 14:32:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.120 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:02.120 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:02.380 nvme0n1
00:34:02.380 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:34:02.380 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:02.380 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:02.381 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:02.381 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:34:02.381 14:32:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:02.641 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:02.641 Zero copy mechanism will not be used.
00:34:02.641 Running I/O for 2 seconds...
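Before this 16-deep randread pass starts, the trace above shows the RPC sequence that arms it. As a sketch of the same four calls (the RPC shell variable is shorthand introduced here, not part of digest.sh):

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # count NVMe errors; retry failed I/O indefinitely
    $RPC accel_error_inject_error -o crc32c -t disable                    # start from a clean injection state
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32              # then corrupt crc32c results (-i 32 as traced)
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # data digest enabled, so corrupted CRCs are caught

Note the actual trace re-enables corruption only after the attach; the ordering here is compressed for readability. With the digest corrupted in software, each affected 131072-byte read (len:32 blocks in the records below) fails its data digest check and completes as a transient transport error, which is exactly the stream that follows.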
00:34:02.641 [2024-11-25 14:32:07.508489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.508522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.508531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.519737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.519761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.519768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.525798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.525823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.525829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.530122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.530140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.530147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.537704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.537722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.537728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.548673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.548691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.548698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.555632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.555650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.555657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.560055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.560074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.560080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.566086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.566103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.566110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.573080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.573098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.573105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.580821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.580840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.580846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.589629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.589647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.589653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.598501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.598520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.598527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.608544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.608562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.608569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.620408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.620426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.620433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.632973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.632992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.632999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.641927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.641946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.641952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.647392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.647410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.647417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.658838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.658858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.658864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.666039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.666058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.666068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.671335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.671354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.671360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.682054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.682073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.682079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.689744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.689763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.689769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.699795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.699813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.699819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.705330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.705348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.641 [2024-11-25 14:32:07.705355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.641 [2024-11-25 14:32:07.715374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.641 [2024-11-25 14:32:07.715392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.642 [2024-11-25 14:32:07.715399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.642 [2024-11-25 14:32:07.721817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.642 [2024-11-25 14:32:07.721835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.642 [2024-11-25 14:32:07.721841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.729615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.729634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:02.902 [2024-11-25 14:32:07.729641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.738125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.738144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.902 [2024-11-25 14:32:07.738150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.742672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.742691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.902 [2024-11-25 14:32:07.742697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.747264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.747282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.902 [2024-11-25 14:32:07.747289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.754580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.754598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.902 [2024-11-25 14:32:07.754605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.765083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.765102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.902 [2024-11-25 14:32:07.765108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.776441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.776460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.902 [2024-11-25 14:32:07.776466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.787993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.788012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.902 [2024-11-25 14:32:07.788018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.902 [2024-11-25 14:32:07.799177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.902 [2024-11-25 14:32:07.799195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.799201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.804383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.804402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.804412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.809746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.809764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.809771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.814202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.814220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.814226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.819153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.819177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.819183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.827034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.827052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.827058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.834766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.834785] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.834791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.846349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.846368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.846374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.857828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.857846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.857852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.863058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.863077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.863083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.870062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.870084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.870090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.875985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.876003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.876009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.880247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 14:32:07.880265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.903 [2024-11-25 14:32:07.880272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.903 [2024-11-25 14:32:07.886889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:02.903 [2024-11-25 
14:32:07.886907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.903 [2024-11-25 14:32:07.886913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:02.903 [2024-11-25 14:32:07.891208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00)
00:34:02.903 [2024-11-25 14:32:07.891227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:02.903 [2024-11-25 14:32:07.891233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... repeated log entries elided: the same three-line sequence (nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0x22efa00), a READ command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) recurs with varying timestamps, cid, and lba values from 14:32:07.897027 through 14:32:08.497289 ...]
00:34:03.427 3412.00 IOPS, 426.50 MiB/s [2024-11-25T13:32:08.517Z]
[... the same three-line data digest error sequence continues, with varying timestamps, cid, and lba values, from 14:32:08.508422 through 14:32:09.277373 ...]
00:34:04.212 [2024-11-25 14:32:09.288505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00)
00:34:04.212 [2024-11-25 14:32:09.288524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.212 [2024-11-25 14:32:09.288531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0
dnr:0 00:34:04.473 [2024-11-25 14:32:09.301029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.301048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.301055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.314063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.314081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.314087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.326478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.326496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.326503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.338981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.339000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.339006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.351176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.351194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.351201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.363993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.364011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.364020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.376609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.376628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.376635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.389052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.389072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.389078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.401394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.401412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.401418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.413781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.413799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.413805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.426040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.426058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.426064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:04.473 [2024-11-25 14:32:09.437275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.473 [2024-11-25 14:32:09.437294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.473 [2024-11-25 14:32:09.437300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:04.474 [2024-11-25 14:32:09.449308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.474 [2024-11-25 14:32:09.449327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.474 [2024-11-25 14:32:09.449333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:04.474 [2024-11-25 14:32:09.460779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00) 00:34:04.474 [2024-11-25 14:32:09.460796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:04.474 [2024-11-25 14:32:09.460803] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:04.474 [2024-11-25 14:32:09.470851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00)
00:34:04.474 [2024-11-25 14:32:09.470873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.474 [2024-11-25 14:32:09.470879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:04.474 [2024-11-25 14:32:09.482134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00)
00:34:04.474 [2024-11-25 14:32:09.482153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.474 [2024-11-25 14:32:09.482163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:34:04.474 [2024-11-25 14:32:09.493146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00)
00:34:04.474 [2024-11-25 14:32:09.493170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.474 [2024-11-25 14:32:09.493177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:34:04.474 3146.00 IOPS, 393.25 MiB/s [2024-11-25T13:32:09.564Z]
[2024-11-25 14:32:09.505362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22efa00)
00:34:04.474 [2024-11-25 14:32:09.505379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:04.474 [2024-11-25 14:32:09.505385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:34:04.474
00:34:04.474 Latency(us)
00:34:04.474 [2024-11-25T13:32:09.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:04.474 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:34:04.474 nvme0n1 : 2.05 3080.81 385.10 0.00 0.00 5091.42 614.40 48715.09
00:34:04.474 [2024-11-25T13:32:09.564Z] ===================================================================================================================
00:34:04.474 [2024-11-25T13:32:09.564Z] Total : 3080.81 385.10 0.00 0.00 5091.42 614.40 48715.09
00:34:04.474 {
00:34:04.474   "results": [
00:34:04.474     {
00:34:04.474       "job": "nvme0n1",
00:34:04.474       "core_mask": "0x2",
00:34:04.474       "workload": "randread",
00:34:04.474       "status": "finished",
00:34:04.474       "queue_depth": 16,
00:34:04.474       "io_size": 131072,
00:34:04.474       "runtime": 2.047513,
00:34:04.474       "iops": 3080.810720127296,
00:34:04.474       "mibps": 385.101340015912,
00:34:04.474       "io_failed": 0,
00:34:04.474       "io_timeout": 0,
00:34:04.474       "avg_latency_us": 5091.416850560135,
00:34:04.474       "min_latency_us": 614.4,
00:34:04.474       "max_latency_us": 48715.09333333333
00:34:04.474     }
00:34:04.474   ],
00:34:04.474   "core_count": 1
00:34:04.474 }
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
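(Editor's note: get_transient_errcount, called above and expanded in the trace that follows, just reads the per-bdev NVMe error counters that --nvme-error-stat enables. A minimal Python sketch of the same lookup: the rpc.py command line and the JSON path are taken verbatim from the trace below; everything else, including the absence of error handling, is a simplification:)

```python
# Sketch of get_transient_errcount: invoke the same rpc.py command the trace
# shows, then walk the same JSON path as its jq filter.
import json
import subprocess

RPC_PY = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def get_transient_errcount(bdev: str) -> int:
    out = subprocess.check_output(
        [RPC_PY, "-s", "/var/tmp/bperf.sock", "bdev_get_iostat", "-b", bdev])
    stats = json.loads(out)
    # jq: '.bdevs[0] | .driver_specific | .nvme_error | .status_code
    #      | .command_transient_transport_error'
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

# This run observed 204 transient transport errors, so the test's
# "(( errcount > 0 ))" style assertion in the trace below passes.
```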
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:04.735 | .driver_specific
00:34:04.735 | .nvme_error
00:34:04.735 | .status_code
00:34:04.735 | .command_transient_transport_error'
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 204 > 0 ))
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3610245
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3610245 ']'
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3610245
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3610245
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3610245'
killing process with pid 3610245
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3610245
00:34:04.735 Received shutdown signal, test time was about 2.000000 seconds
00:34:04.735
00:34:04.735 Latency(us)
00:34:04.735 [2024-11-25T13:32:09.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:04.735 [2024-11-25T13:32:09.825Z] ===================================================================================================================
00:34:04.735 [2024-11-25T13:32:09.825Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:04.735 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3610245
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3611009
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3611009 /var/tmp/bperf.sock
00:34:04.997 14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3611009 ']'
14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
14:32:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:04.997 [2024-11-25 14:32:09.977288] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:34:04.997 [2024-11-25 14:32:09.977342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611009 ]
00:34:05.257 [2024-11-25 14:32:10.062051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:05.257 [2024-11-25 14:32:10.091777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:05.826 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:05.826 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:34:05.826 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:05.826 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:34:06.086 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:34:06.086 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:06.086 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:06.086 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:06.086 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:06.086 14:32:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:06.346 nvme0n1
00:34:06.346 14:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
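(Editor's note: everything in this leg is driven over bdevperf's RPC socket: NVMe error stats switched on, a controller attached with data digest (--ddgst) enabled, then crc32c corruption re-armed. The sketch below re-issues those three calls as raw JSON-RPC, which is SPDK's framing over a Unix socket. The method names, socket path, and values come from the trace above, but the parameter key names are assumptions inferred from the rpc.py flags, not verified against the rpc.py source:)

```python
# Hypothetical re-issue of this leg's RPCs against bdevperf's socket.
# JSON-RPC 2.0 over a Unix socket is SPDK's RPC framing; the parameter
# key names below are assumptions derived from the CLI flags in the log.
import json
import socket

SOCK = "/var/tmp/bperf.sock"

def rpc(method, **params):
    # Sketch: assumes one well-formed response per request, no error handling.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                              "method": method, "params": params}).encode())
        buf = b""
        while True:
            buf += s.recv(4096)
            try:
                return json.loads(buf)  # keep reading until the JSON parses
            except json.JSONDecodeError:
                continue

# Count NVMe status codes per command (--nvme-error-stat) and retry forever.
rpc("bdev_nvme_set_options", nvme_error_stat=True, bdev_retry_count=-1)
# Attach the target with data digest (DDGST) verification on the TCP qpairs.
rpc("bdev_nvme_attach_controller", name="nvme0", trtype="tcp",
    traddr="10.0.0.2", trsvcid="4420", adrfam="ipv4",
    subnqn="nqn.2016-06.io.spdk:cnode1", ddgst=True)
# Make the accel framework corrupt crc32c results; the key and semantics
# of the '-i 256' argument are assumed from SPDK's accel_error module.
rpc("accel_error_inject_error", opcode="crc32c", type="corrupt", interval=256)
```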
14:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
14:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
14:32:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:34:06.607 Running I/O for 2 seconds...
[2024-11-25 14:32:11.436463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166eb328
00:34:06.607 [2024-11-25 14:32:11.437393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.607 [2024-11-25 14:32:11.437421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:06.607 [2024-11-25 14:32:11.444977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ea248
00:34:06.607 [2024-11-25 14:32:11.445947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.607 [2024-11-25 14:32:11.445964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:06.607 [2024-11-25 14:32:11.453475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e9168
00:34:06.607 [2024-11-25 14:32:11.454391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.607 [2024-11-25 14:32:11.454408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:06.607 [2024-11-25 14:32:11.461935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e8088
00:34:06.607 [2024-11-25 14:32:11.462882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.607 [2024-11-25 14:32:11.462899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:06.607 [2024-11-25 14:32:11.470418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e6fa8
00:34:06.607 [2024-11-25 14:32:11.471368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.607 [2024-11-25 14:32:11.471385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:34:06.607 [2024-11-25 14:32:11.478888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e5ec8
00:34:06.607 [2024-11-25 14:32:11.479832] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.479848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.487346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e12d8 00:34:06.607 [2024-11-25 14:32:11.488280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.488297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.495804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e23b8 00:34:06.607 [2024-11-25 14:32:11.496737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.496753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.504260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e3498 00:34:06.607 [2024-11-25 14:32:11.505185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.505201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.512718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e4578 00:34:06.607 [2024-11-25 14:32:11.513682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.513698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.521156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f2510 00:34:06.607 [2024-11-25 14:32:11.522112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.522128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.529612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f1430 00:34:06.607 [2024-11-25 14:32:11.530508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.530524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.538033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f0350 00:34:06.607 [2024-11-25 14:32:11.538968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.538984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.607 [2024-11-25 14:32:11.546471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ef270 00:34:06.607 [2024-11-25 14:32:11.547380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.607 [2024-11-25 14:32:11.547396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.554879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ee190 00:34:06.608 [2024-11-25 14:32:11.555833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.555849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.563288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ed0b0 00:34:06.608 [2024-11-25 14:32:11.564225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.564241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.571721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ebfd0 00:34:06.608 [2024-11-25 14:32:11.572617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.572633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.580150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166eaef0 00:34:06.608 [2024-11-25 14:32:11.581087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.581103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.588576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e9e10 00:34:06.608 [2024-11-25 14:32:11.589485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.589501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.596984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e8d30 00:34:06.608 [2024-11-25 
14:32:11.597919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.597935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.605413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e7c50 00:34:06.608 [2024-11-25 14:32:11.606334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.606350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.613830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e6b70 00:34:06.608 [2024-11-25 14:32:11.614757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.614776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.622267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e5a90 00:34:06.608 [2024-11-25 14:32:11.623216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.623233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.630691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e1710 00:34:06.608 [2024-11-25 14:32:11.631618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.631634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.639117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e27f0 00:34:06.608 [2024-11-25 14:32:11.640054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.640070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.647527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e38d0 00:34:06.608 [2024-11-25 14:32:11.648458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.648473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.655949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e49b0 
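(Editor's note: in these completions SPDK prints the status as (sct/sc), here (00/22): generic status code type, Command Transient Transport Error, alongside the phase (p), more (m) and do-not-retry (dnr) bits and the submission queue head (sqhd). A quick decode of the status word, assuming the standard completion queue entry layout from the NVMe base specification; the WRITE records continue below:)

```python
# Decode NVMe CQE Dword 3 (phase + status field), per the NVMe base spec:
# bit 16 = phase tag, bits 24:17 = status code (SC), 27:25 = status code
# type (SCT), 29:28 = command retry delay, 30 = more, 31 = do not retry.
def decode_cqe_dw3(dw3: int) -> dict:
    return {
        "p":   (dw3 >> 16) & 0x1,
        "sc":  (dw3 >> 17) & 0xFF,  # 0x22: Command Transient Transport Error
        "sct": (dw3 >> 25) & 0x7,   # 0x0: generic command status
        "crd": (dw3 >> 28) & 0x3,
        "m":   (dw3 >> 30) & 0x1,
        "dnr": (dw3 >> 31) & 0x1,   # dnr:0 here, so the command is retryable
    }

status = decode_cqe_dw3(0x22 << 17)  # the (00/22) seen in every record here
assert status["sc"] == 0x22 and status["sct"] == 0 and status["dnr"] == 0
```

With dnr clear and --bdev-retry-count -1 set earlier in the trace, these failures are retried rather than surfaced, which is consistent with "io_failed": 0 in the results JSON above.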
00:34:06.608 [2024-11-25 14:32:11.656843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.656859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.664674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166feb58 00:34:06.608 [2024-11-25 14:32:11.665715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.665730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.674412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fe2e8 00:34:06.608 [2024-11-25 14:32:11.675886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.675902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.680392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e5220 00:34:06.608 [2024-11-25 14:32:11.681073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.681088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.608 [2024-11-25 14:32:11.688938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e4140 00:34:06.608 [2024-11-25 14:32:11.689650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.608 [2024-11-25 14:32:11.689666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.869 [2024-11-25 14:32:11.697376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f6458 00:34:06.869 [2024-11-25 14:32:11.698030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.869 [2024-11-25 14:32:11.698046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.869 [2024-11-25 14:32:11.705797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f81e0 00:34:06.869 [2024-11-25 14:32:11.706500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.869 [2024-11-25 14:32:11.706515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.869 [2024-11-25 14:32:11.714219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) 
with pdu=0x2000166f92c0 00:34:06.869 [2024-11-25 14:32:11.714929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.714945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.722655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fa3a0 00:34:06.870 [2024-11-25 14:32:11.723339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.723355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.731066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fac10 00:34:06.870 [2024-11-25 14:32:11.731768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.731783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.739480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e2c28 00:34:06.870 [2024-11-25 14:32:11.740174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.740190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.747894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e1b48 00:34:06.870 [2024-11-25 14:32:11.748594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.748610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.756333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e5658 00:34:06.870 [2024-11-25 14:32:11.757036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.757052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.764786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e6738 00:34:06.870 [2024-11-25 14:32:11.765488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.765504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.773260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1abe520) with pdu=0x2000166e8088 00:34:06.870 [2024-11-25 14:32:11.773956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.773972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.781669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ed920 00:34:06.870 [2024-11-25 14:32:11.782376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.782392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.790084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166eea00 00:34:06.870 [2024-11-25 14:32:11.790757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.790773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.798499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166efae0 00:34:06.870 [2024-11-25 14:32:11.799200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.799215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.806928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f0bc0 00:34:06.870 [2024-11-25 14:32:11.807627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.807644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.815356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f1ca0 00:34:06.870 [2024-11-25 14:32:11.816068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.816083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.823763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e4de8 00:34:06.870 [2024-11-25 14:32:11.824458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.824474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.832168] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e3d08 00:34:06.870 [2024-11-25 14:32:11.832857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.832875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.840569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f7538 00:34:06.870 [2024-11-25 14:32:11.841286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.841301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.848995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f8618 00:34:06.870 [2024-11-25 14:32:11.849701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.849717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.857418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f96f8 00:34:06.870 [2024-11-25 14:32:11.858125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.858141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.865853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fb048 00:34:06.870 [2024-11-25 14:32:11.866559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.866575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.873718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ff3c8 00:34:06.870 [2024-11-25 14:32:11.874399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.874415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.882954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fd208 00:34:06.870 [2024-11-25 14:32:11.883754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.870 [2024-11-25 14:32:11.883770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.870 [2024-11-25 14:32:11.891370] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fcdd0
00:34:06.870 [2024-11-25 14:32:11.892169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.870 [2024-11-25 14:32:11.892184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:34:06.870 [2024-11-25 14:32:11.900008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e6300
00:34:06.870 [2024-11-25 14:32:11.900773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:06.870 [2024-11-25 14:32:11.900789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0
[... the same ERROR/NOTICE/NOTICE triple repeats for each queued WRITE on tqpair=(0x1abe520) from 14:32:11.908445 through 14:32:12.414968, one triple roughly every 8 ms; cid, lba and sqhd vary per command, the pdu address cycles through the 0x2000166d...-0x2000166f... buffer range, and every completion is COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:34:07.396 [2024-11-25 14:32:12.422636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e0ea0
29954.00 IOPS, 117.01 MiB/s [2024-11-25T13:32:12.486Z]
[2024-11-25 14:32:12.423485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-25 14:32:12.423500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
[... the triples continue uninterrupted from 14:32:12.431136 through 14:32:13.106814, still all on tqpair=(0x1abe520) and all completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:34:08.185 [2024-11-25 14:32:13.114531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166edd58
[2024-11-25 14:32:13.115223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-25 14:32:13.115239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.185 [2024-11-25 14:32:13.122944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e0a68 00:34:08.185 [2024-11-25 14:32:13.123651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.185 [2024-11-25 14:32:13.123667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.185 [2024-11-25 14:32:13.131387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ee5c8 00:34:08.185 [2024-11-25 14:32:13.132080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.185 [2024-11-25 14:32:13.132096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.185 [2024-11-25 14:32:13.139819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f7970 00:34:08.185 [2024-11-25 14:32:13.140520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.185 [2024-11-25 14:32:13.140536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.148259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e1f80 00:34:08.186 [2024-11-25 14:32:13.148906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.148921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.156686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e3060 00:34:08.186 [2024-11-25 14:32:13.157376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.157392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.165108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fdeb0 00:34:08.186 [2024-11-25 14:32:13.165811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.165827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.173535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166de470 00:34:08.186 [2024-11-25 14:32:13.174241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.174257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.181990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f8618 00:34:08.186 [2024-11-25 14:32:13.182705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.182721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.190460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f96f8 00:34:08.186 [2024-11-25 14:32:13.191169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.191185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.198900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fb048 00:34:08.186 [2024-11-25 14:32:13.199572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.199588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.207412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ed4e8 00:34:08.186 [2024-11-25 14:32:13.208098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.208114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.215829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fbcf0 00:34:08.186 [2024-11-25 14:32:13.216524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.216540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.224265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ff3c8 00:34:08.186 [2024-11-25 14:32:13.224968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.224987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.232700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f4f40 00:34:08.186 [2024-11-25 14:32:13.233378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.233395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.241130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e8088 00:34:08.186 [2024-11-25 14:32:13.241823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.241839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.249563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e6738 00:34:08.186 [2024-11-25 14:32:13.250263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.250279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.257983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e0ea0 00:34:08.186 [2024-11-25 14:32:13.258677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.258693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.186 [2024-11-25 14:32:13.266428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166eea00 00:34:08.186 [2024-11-25 14:32:13.267131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.186 [2024-11-25 14:32:13.267147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.274916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166eee38 00:34:08.447 [2024-11-25 14:32:13.275611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.275626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.283392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f6890 00:34:08.447 [2024-11-25 14:32:13.284101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.284116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.291820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f7da8 00:34:08.447 [2024-11-25 14:32:13.292531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.292547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.300239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e2c28 00:34:08.447 [2024-11-25 14:32:13.300893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.300908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.308671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fac10 00:34:08.447 [2024-11-25 14:32:13.309369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.309385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.317108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166de8a8 00:34:08.447 [2024-11-25 14:32:13.317800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.317815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.325559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f8a50 00:34:08.447 [2024-11-25 14:32:13.326224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.326240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.447 [2024-11-25 14:32:13.334008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f9b30 00:34:08.447 [2024-11-25 14:32:13.334707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.447 [2024-11-25 14:32:13.334723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.342429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fb480 00:34:08.448 [2024-11-25 14:32:13.343128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 14:32:13.343144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.350851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ee190 00:34:08.448 [2024-11-25 14:32:13.351555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 
14:32:13.351571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.359279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166fc128 00:34:08.448 [2024-11-25 14:32:13.359972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 14:32:13.359988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.367710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166feb58 00:34:08.448 [2024-11-25 14:32:13.368371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 14:32:13.368386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.376138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f5378 00:34:08.448 [2024-11-25 14:32:13.376812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 14:32:13.376828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.384569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e8d30 00:34:08.448 [2024-11-25 14:32:13.385257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 14:32:13.385273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.392988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166edd58 00:34:08.448 [2024-11-25 14:32:13.393639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 14:32:13.393654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.401427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e0a68 00:34:08.448 [2024-11-25 14:32:13.402113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.448 [2024-11-25 14:32:13.402129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:08.448 [2024-11-25 14:32:13.409869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166ee5c8 00:34:08.448 [2024-11-25 14:32:13.410568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:08.448 [2024-11-25 14:32:13.410584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:08.448 [2024-11-25 14:32:13.418296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166f7970
00:34:08.448 [2024-11-25 14:32:13.418997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:08.448 [2024-11-25 14:32:13.419012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:08.448 30149.00 IOPS, 117.77 MiB/s [2024-11-25T13:32:13.538Z]
00:34:08.448 [2024-11-25 14:32:13.426703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe520) with pdu=0x2000166e1f80
00:34:08.448 [2024-11-25 14:32:13.427377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:08.448 [2024-11-25 14:32:13.427392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:34:08.448
00:34:08.448 Latency(us)
00:34:08.448 [2024-11-25T13:32:13.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:08.448 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:08.448 nvme0n1 : 2.01 30148.94 117.77 0.00 0.00 4240.22 2252.80 14964.05
00:34:08.448 [2024-11-25T13:32:13.538Z] ===================================================================================================================
00:34:08.448 [2024-11-25T13:32:13.538Z] Total : 30148.94 117.77 0.00 0.00 4240.22 2252.80 14964.05
00:34:08.448 {
00:34:08.448   "results": [
00:34:08.448     {
00:34:08.448       "job": "nvme0n1",
00:34:08.448       "core_mask": "0x2",
00:34:08.448       "workload": "randwrite",
00:34:08.448       "status": "finished",
00:34:08.448       "queue_depth": 128,
00:34:08.448       "io_size": 4096,
00:34:08.448       "runtime": 2.006339,
00:34:08.448       "iops": 30148.94292539795,
00:34:08.448       "mibps": 117.76930830233574,
00:34:08.448       "io_failed": 0,
00:34:08.448       "io_timeout": 0,
00:34:08.448       "avg_latency_us": 4240.221445001019,
00:34:08.448       "min_latency_us": 2252.8,
00:34:08.448       "max_latency_us": 14964.053333333333
00:34:08.448     }
00:34:08.448   ],
00:34:08.448   "core_count": 1
00:34:08.448 }
00:34:08.448 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:34:08.448 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:34:08.448 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:34:08.448 | .driver_specific
00:34:08.448 | .nvme_error
00:34:08.448 | .status_code
00:34:08.448 | .command_transient_transport_error'
00:34:08.448 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 237 > 0 ))
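For reference, the transient-error check traced above reduces to one RPC plus a jq filter. A minimal sketch of the same query run by hand (socket path, bdev name, and jq path exactly as used in this run; in this pass it returned 237):

    # Fetch per-bdev I/O statistics over the bdevperf RPC socket, then pull out
    # the NVMe error counter that `bdev_nvme_set_options --nvme-error-stat` enables.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The digest-error test passes when this count is greater than zero, i.e. the injected CRC32C corruption actually surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions instead of being silently accepted.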
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3611009
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3611009 ']'
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3611009
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3611009
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3611009'
killing process with pid 3611009
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3611009
00:34:08.709 Received shutdown signal, test time was about 2.000000 seconds
00:34:08.709
00:34:08.709 Latency(us)
00:34:08.709 [2024-11-25T13:32:13.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:08.709 [2024-11-25T13:32:13.799Z] ===================================================================================================================
00:34:08.709 [2024-11-25T13:32:13.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:34:08.709 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3611009
00:34:08.970 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3611694
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3611694 /var/tmp/bperf.sock
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3611694 ']'
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
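The relaunch above starts the second error pass with 128 KiB random writes at queue depth 16. A sketch of that bdevperf invocation with the flags unpacked (binary and socket paths as in this run; the flag glosses are mine):

    # -m 2: core mask (run the reactor on core 1)   -r: RPC socket the harness drives
    # -w randwrite: workload type                   -o 131072: I/O size in bytes (128 KiB)
    # -t 2: run time in seconds                     -q 16: queue depth
    # -z: start idle and wait for the perform_tests RPC before issuing I/O
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z

Starting with -z is what lets the harness attach the controller and arm the error injection over the same socket before any I/O is issued.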
00:34:08.970 14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
14:32:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:34:08.970 [2024-11-25 14:32:13.857098] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:34:08.970 [2024-11-25 14:32:13.857180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3611694 ]
00:34:08.970 I/O size of 131072 is greater than zero copy threshold (65536).
00:34:08.970 Zero copy mechanism will not be used.
00:34:08.970 [2024-11-25 14:32:13.943283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:08.970 [2024-11-25 14:32:13.972794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:09.911 14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:32:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:34:10.172 nvme0n1
00:34:10.172 14:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
14:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
14:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
14:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
14:32:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
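Taken together, the trace above is the whole error-injection recipe for this pass: enable per-status-code NVMe error accounting with unlimited retries, reset CRC32C injection before the controller attaches, attach over TCP with data digest (--ddgst) enabled, then corrupt the accel framework's CRC32C output so the data digests on outgoing write PDUs are wrong. A sketch of the same RPC sequence (socket, address, and NQN as used in this run; per my reading of -i, the corruption is injected every 32nd crc32c operation):

    sock=/var/tmp/bperf.sock
    # Count NVMe errors per status code; retry failed I/O indefinitely (-1).
    scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Keep digests clean while the controller attaches...
    scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t disable
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # ...then start corrupting CRC32C results so data digests go out bad.
    scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32

The target recomputes the digest on each incoming write (the tcp.c data_crc32_calc_done errors that follow), fails the command with COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the host-side iostat counter tallies those completions for the final check.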
00:34:10.172 I/O size of 131072 is greater than zero copy threshold (65536). 00:34:10.172 Zero copy mechanism will not be used. 00:34:10.172 Running I/O for 2 seconds... 00:34:10.172 [2024-11-25 14:32:15.125306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.125547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.125575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.135662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.135914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.135932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.147032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.147296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.147313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.157081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.157319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.157335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.167315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.167505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.167521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.178201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.178480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.178498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.188585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.188844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.188860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.199107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.199373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.199389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.209970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.210238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.210256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.220915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.221254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.221271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.231896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.232175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.232192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.242130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.242493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.242509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.172 [2024-11-25 14:32:15.252719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.172 [2024-11-25 14:32:15.253098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.172 [2024-11-25 14:32:15.253116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.434 [2024-11-25 14:32:15.263232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.434 [2024-11-25 14:32:15.263569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.434 [2024-11-25 14:32:15.263586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.434 [2024-11-25 14:32:15.271530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.434 [2024-11-25 14:32:15.271773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.271789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.281750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.282044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.282061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.291592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.291866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.291882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.301557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.301794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.301810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.311980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.312178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.312194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.317142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.317320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.317337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.321355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.321523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.321540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.325177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.325349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.325365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.328876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.328972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.328988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.334535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.334694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.334710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.339792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.340092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.340109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.348006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.348067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.348082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.356586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.356813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.356831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.362894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.362944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.362959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.367297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.367343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.367358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.373104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.373173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.373188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.377610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.377655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.377671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.384927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.384971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.384987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.389051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.389096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.389112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.393355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.393414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.393430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.399515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 
14:32:15.399573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.399588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.404153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.404209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.404224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.410721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.410771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.410786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.417877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.417941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.417956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.423813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.424097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.424112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.427773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.427820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.427835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.432380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.435 [2024-11-25 14:32:15.432423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.435 [2024-11-25 14:32:15.432438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.435 [2024-11-25 14:32:15.438257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with 
pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.438319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.438334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.443855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.443897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.443912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.447964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.448033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.448048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.452131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.452186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.452202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.460223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.460269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.460284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.466441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.466485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.466501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.471724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.471958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.471973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.481409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.481461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.481476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.491813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.492043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.492058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.502864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.503192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.503208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.436 [2024-11-25 14:32:15.513833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.436 [2024-11-25 14:32:15.514105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.436 [2024-11-25 14:32:15.514121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.523702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.523962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.523980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.534237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.534292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.534307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.544123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.544390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.544405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.554080] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.554319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.554334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.562047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.562121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.562136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.565545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.565598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.565614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.568615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.568675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.568690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.571673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.571736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.571751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.575057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.575107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.575122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.578635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.578706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.578721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.583040] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.583100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.583115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.587326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.587370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.587385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.590831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.590889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.590904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.594033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.594090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.594105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.597260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.597317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.597332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.600532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.600593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.600608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.698 [2024-11-25 14:32:15.604092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.604137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.604151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.698 
[2024-11-25 14:32:15.607273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.698 [2024-11-25 14:32:15.607321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.698 [2024-11-25 14:32:15.607336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.610593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.610642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.610657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.617913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.617978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.617993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.622963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.623021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.623036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.626257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.626301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.626316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.629875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.629962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.629977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.633315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.633359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.633374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.636746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.636805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.636820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.641754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.641798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.641814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.647490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.647586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.647605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.653624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.653825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.653841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.660881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.661176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.661191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.664906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.664949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.664964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.668002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.668048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.668064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.671047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.671091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.671106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.674092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.674146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.677235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.677280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.677295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.680237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.680283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.680297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.683177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.683223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.683238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.685888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.685929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.685944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.688814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.688857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.688872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.691563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.691606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.691622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.694544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.694596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.694611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.697956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.698061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.698076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.699 [2024-11-25 14:32:15.702655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.699 [2024-11-25 14:32:15.702699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.699 [2024-11-25 14:32:15.702715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.707055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.707104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.707119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.709762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.709803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.709819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.712423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.712470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.712485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.715171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.715216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.715231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.719511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.719567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.719582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.724508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.724549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.724563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.727561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.727624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.727639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.730821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.730866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.730881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.734129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.734205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.734220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.737189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.737232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 
14:32:15.737247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.740046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.740089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.740108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.744141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.744206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.744221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.747661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.747914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.747929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.750792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.750881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.750896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.754475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.754570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.754585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.757706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.757771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.757786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.762781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.763082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:10.700 [2024-11-25 14:32:15.763099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.772917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.773248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.773265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.700 [2024-11-25 14:32:15.783115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.700 [2024-11-25 14:32:15.783409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.700 [2024-11-25 14:32:15.783425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.793394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.793730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.793746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.803098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.803342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.803357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.813946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.814084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.814099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.824514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.824702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.824717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.835792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.835873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.835888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.845583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.845816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.845830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.853020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.853138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.853153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.862607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.862894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.862910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.867031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.867090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.867106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.871388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.871434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.871449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.877325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.877369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.877383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.884532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.884576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.884592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.890565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.890610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.890626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.897346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.897396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.897411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.904737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.904787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.904803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.909300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.909583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.909599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.914188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.914237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.914252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.918238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.918282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.918305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.921600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.921663] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.921678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.924665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.924706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.924721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.927894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.927958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.927973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.930958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.931007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.931022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.934097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.934144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.934164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.963 [2024-11-25 14:32:15.936988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.963 [2024-11-25 14:32:15.937032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.963 [2024-11-25 14:32:15.937047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.939831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.939882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.939897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.942543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.942591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.942606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.945359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.945409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.945424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.948106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.948162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.948178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.950758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.950809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.950825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.953424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.953469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.953484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.956034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.956086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.956101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.960334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.960377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.960392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.965288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 
14:32:15.965362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.965377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.968001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.968045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.968061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.970548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.970592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.970607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.973093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.973137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.973152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.975619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.975666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.975681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.978140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.978196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.978211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.980677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:10.964 [2024-11-25 14:32:15.980727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:10.964 [2024-11-25 14:32:15.980742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:10.964 [2024-11-25 14:32:15.983225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with 
pdu=0x2000166ff3c8
00:34:10.964 [2024-11-25 14:32:15.983273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.964 [2024-11-25 14:32:15.983288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:10.964 [2024-11-25 14:32:15.985740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8
00:34:10.964 [2024-11-25 14:32:15.985791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:10.964 [2024-11-25 14:32:15.985806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[several dozen repeats of the same three-line pattern, 14:32:15.988 through 14:32:16.123: tcp.c:2233:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8, the failed WRITE (sqid:1 cid:0 nsid:1 len:32, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion]
00:34:11.227 5753.00 IOPS, 719.12 MiB/s [2024-11-25T13:32:16.317Z]
[roughly a hundred further identical data-digest-error triplets, 14:32:16.127 through 14:32:16.748, all on tqpair=(0x1abe860) with pdu=0x2000166ff3c8]
00:34:11.755 [2024-11-25 14:32:16.750942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8
00:34:11.755 [2024-11-25 14:32:16.750999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.755 [2024-11-25 14:32:16.751014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:34:11.755 [2024-11-25 14:32:16.755002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8
00:34:11.755 [2024-11-25 14:32:16.755061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:11.755 [2024-11-25 14:32:16.755076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.758034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.758118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.758132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.763128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.763361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.763377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.773191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.773473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.773489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.782285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.782610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.782626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.792602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.792821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.792836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.802756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.803037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.803052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.812758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.812922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.812936] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.823552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.823776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.823791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:11.755 [2024-11-25 14:32:16.833992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:11.755 [2024-11-25 14:32:16.834301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:11.755 [2024-11-25 14:32:16.834316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.844690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.844751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.844766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.854256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.854563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.854578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.862271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.862556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.862571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.871544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.871615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.871630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.880071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.880341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 
14:32:16.880356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.889965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.890264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.890283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.899934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.900189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.900205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.909723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.909993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.910007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.915019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.915085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.915100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.917741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.917786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.917801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.920438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.920516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.920532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.923340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.923410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:12.017 [2024-11-25 14:32:16.923425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.926946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.927017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.927032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.929809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.929865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.929880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.932911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.932984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.932999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.935818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.935864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.935879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.940131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.940186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.017 [2024-11-25 14:32:16.940201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.017 [2024-11-25 14:32:16.945382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.017 [2024-11-25 14:32:16.945435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.945450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.949271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.949338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.949353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.952012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.952058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.952073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.954964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.955026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.955041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.957989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.958043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.958057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.960663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.960716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.960731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.963171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.963279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.963295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.966637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.966724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.966739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.969353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.969404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.969419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.971838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.971890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.971905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.974326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.974374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.974389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.976811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.976872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.976887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.979655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.979753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.979768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.982774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.982840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.982854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.985358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.985433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.985450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.990461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.990809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.990825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:16.998028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:16.998345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:16.998361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.007499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.007849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.007865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.017966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.018212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.018227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.028673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.028965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.028981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.038652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.038917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.038932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.049478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.049738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.049759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.059558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.059795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.059810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.069459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.069721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.069736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.079692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.079935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.079950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.090384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.090600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.090615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.018 [2024-11-25 14:32:17.101141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.018 [2024-11-25 14:32:17.101444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.018 [2024-11-25 14:32:17.101460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.279 [2024-11-25 14:32:17.109409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.279 [2024-11-25 14:32:17.109499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.279 [2024-11-25 14:32:17.109514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:12.279 [2024-11-25 14:32:17.118052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.279 [2024-11-25 14:32:17.118363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.279 [2024-11-25 14:32:17.118379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:12.279 [2024-11-25 14:32:17.124841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.279 [2024-11-25 
14:32:17.125102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.279 [2024-11-25 14:32:17.125117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:12.279 5484.00 IOPS, 685.50 MiB/s [2024-11-25T13:32:17.369Z] [2024-11-25 14:32:17.129938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1abe860) with pdu=0x2000166ff3c8 00:34:12.279 [2024-11-25 14:32:17.130154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:12.279 [2024-11-25 14:32:17.130173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:12.279 00:34:12.279 Latency(us) 00:34:12.279 [2024-11-25T13:32:17.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.279 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:12.279 nvme0n1 : 2.00 5482.44 685.31 0.00 0.00 2914.30 1181.01 11523.41 00:34:12.279 [2024-11-25T13:32:17.369Z] =================================================================================================================== 00:34:12.279 [2024-11-25T13:32:17.369Z] Total : 5482.44 685.31 0.00 0.00 2914.30 1181.01 11523.41 00:34:12.279 { 00:34:12.279 "results": [ 00:34:12.279 { 00:34:12.279 "job": "nvme0n1", 00:34:12.279 "core_mask": "0x2", 00:34:12.279 "workload": "randwrite", 00:34:12.279 "status": "finished", 00:34:12.279 "queue_depth": 16, 00:34:12.279 "io_size": 131072, 00:34:12.279 "runtime": 2.003304, 00:34:12.279 "iops": 5482.443004157132, 00:34:12.279 "mibps": 685.3053755196415, 00:34:12.279 "io_failed": 0, 00:34:12.279 "io_timeout": 0, 00:34:12.279 "avg_latency_us": 2914.304186469999, 00:34:12.279 "min_latency_us": 1181.0133333333333, 00:34:12.279 "max_latency_us": 11523.413333333334 00:34:12.279 } 00:34:12.279 ], 00:34:12.279 "core_count": 1 00:34:12.279 } 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:12.279 | .driver_specific 00:34:12.279 | .nvme_error 00:34:12.279 | .status_code 00:34:12.279 | .command_transient_transport_error' 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 355 > 0 )) 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3611694 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3611694 ']' 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3611694 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
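
A sketch of the pass/fail check traced above: the test pulls per-bdev iostat out of the bdevperf RPC server and asserts that the transient transport error counter (one increment per injected digest error) is positive. The rpc.py path, socket, and jq filter are copied from the trace; the surrounding function body is an approximation of what host/digest.sh wires together.

    get_transient_errcount() {
        local bdev=$1
        # bperf_rpc: talk to bdevperf's private RPC socket, not the target's
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))    # this run counted 355 transient transport errors
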
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.279 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3611694 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3611694' 00:34:12.540 killing process with pid 3611694 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3611694 00:34:12.540 Received shutdown signal, test time was about 2.000000 seconds 00:34:12.540 00:34:12.540 Latency(us) 00:34:12.540 [2024-11-25T13:32:17.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:12.540 [2024-11-25T13:32:17.630Z] =================================================================================================================== 00:34:12.540 [2024-11-25T13:32:17.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3611694 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3609290 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3609290 ']' 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3609290 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3609290 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3609290' 00:34:12.540 killing process with pid 3609290 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3609290 00:34:12.540 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3609290 00:34:12.802 00:34:12.802 real 0m16.546s 00:34:12.802 user 0m32.786s 00:34:12.802 sys 0m3.572s 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:12.802 ************************************ 00:34:12.802 END TEST nvmf_digest_error 00:34:12.802 ************************************ 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:12.802 14:32:17 
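
The killprocess trace above reduces to the following shape (reconstructed from the traced commands in test/common/autotest_common.sh; the sudo branch is not exercised in this run, so its handling here is an assumption):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1          # no pid supplied
        kill -0 "$pid" || return 1         # probe only: is the process still alive?
        if [ "$(uname)" = Linux ]; then
            # resolve the command name so we never signal a bare sudo wrapper
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                        # reap it and collect the exit status
    }
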
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:12.802 rmmod nvme_tcp 00:34:12.802 rmmod nvme_fabrics 00:34:12.802 rmmod nvme_keyring 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3609290 ']' 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3609290 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3609290 ']' 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3609290 00:34:12.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3609290) - No such process 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3609290 is not found' 00:34:12.802 Process with pid 3609290 is not found 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:12.802 14:32:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:15.348 00:34:15.348 real 0m43.226s 00:34:15.348 user 1m7.781s 00:34:15.348 sys 0m13.162s 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:15.348 ************************************ 00:34:15.348 END TEST nvmf_digest 00:34:15.348 ************************************ 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:34:15.348 14:32:19 
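
Condensing the nvmftestfini teardown traced above into a sketch (command names, the {1..20} retry bound, and the iptables pipeline are from the trace; the sleep between retries and the exact ordering inside the function are assumptions):

    nvmftestfini() {
        sync
        set +e                              # module removal can fail while references drain
        for i in {1..20}; do
            # dropping nvme-tcp also unloads nvme_fabrics/nvme_keyring, per the rmmod lines above
            modprobe -v -r nvme-tcp && break
            sleep 1
        done
        modprobe -v -r nvme-fabrics
        set -e
        [ -n "$nvmfpid" ] && killprocess "$nvmfpid"
        # keep all firewall state except the SPDK_NVMF rules the test installed
        iptables-save | grep -v SPDK_NVMF | iptables-restore
        ip -4 addr flush cvl_0_1            # remove the test IP from the second port
    }
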
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:15.348 ************************************ 00:34:15.348 START TEST nvmf_bdevperf 00:34:15.348 ************************************ 00:34:15.348 14:32:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:15.348 * Looking for test storage... 00:34:15.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:15.348 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.349 --rc genhtml_branch_coverage=1 00:34:15.349 --rc genhtml_function_coverage=1 00:34:15.349 --rc genhtml_legend=1 00:34:15.349 --rc geninfo_all_blocks=1 00:34:15.349 --rc geninfo_unexecuted_blocks=1 00:34:15.349 00:34:15.349 ' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.349 --rc genhtml_branch_coverage=1 00:34:15.349 --rc genhtml_function_coverage=1 00:34:15.349 --rc genhtml_legend=1 00:34:15.349 --rc geninfo_all_blocks=1 00:34:15.349 --rc geninfo_unexecuted_blocks=1 00:34:15.349 00:34:15.349 ' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.349 --rc genhtml_branch_coverage=1 00:34:15.349 --rc genhtml_function_coverage=1 00:34:15.349 --rc genhtml_legend=1 00:34:15.349 --rc geninfo_all_blocks=1 00:34:15.349 --rc geninfo_unexecuted_blocks=1 00:34:15.349 00:34:15.349 ' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:15.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:15.349 --rc genhtml_branch_coverage=1 00:34:15.349 --rc genhtml_function_coverage=1 00:34:15.349 --rc genhtml_legend=1 00:34:15.349 --rc geninfo_all_blocks=1 00:34:15.349 --rc geninfo_unexecuted_blocks=1 00:34:15.349 00:34:15.349 ' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
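
The lt/cmp_versions trace above (used here to decide whether the installed lcov predates 2.x) is a field-wise version compare; a simplified sketch of the same logic, leaving out the decimal helper the real scripts/common.sh uses for non-numeric fields:

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"     # "2"    -> (2)
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}        # missing fields compare as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                  # every field matched
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # so lt 1.15 2 succeeds: 1 < 2 in the first field
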
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:15.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
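
The "line 33: [: : integer expression expected" complaint earlier in this trace is test(1) being handed an empty string where -eq needs an integer; the script survives because the failed test simply evaluates false and execution continues. A minimal reproduction with the usual guard (the variable name is illustrative, not the one nvmf/common.sh tests):

    flag=""
    [ "$flag" -eq 1 ] && echo yes        # -> bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo yes   # guarded: empty defaults to 0, no complaint
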
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:34:15.349 14:32:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:23.494 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.494 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:23.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
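For orientation: the gather_supported_nvmf_pci_devs trace above builds vendor:device ID lists for the e810, x722 and mlx NIC families and matches them against the PCI bus; the "Found 0000:4b:00.0 (0x8086 - 0x159b)" echoes are the two E810 ports being accepted. A minimal standalone sketch of the same sysfs walk, assuming the standard /sys/bus/pci layout (an illustration, not the suite's actual helper):

  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor") device=$(<"$pci/device")
      # 0x8086:0x159b is the Intel E810 ID echoed in the trace above
      [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
      for net in "$pci"/net/*; do   # network interfaces bound to this PCI function
          [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
      done
  done
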
00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:23.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:23.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:23.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:34:23.495 00:34:23.495 --- 10.0.0.2 ping statistics --- 00:34:23.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.495 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:23.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:23.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:34:23.495 00:34:23.495 --- 10.0.0.1 ping statistics --- 00:34:23.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.495 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3616684 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3616684 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3616684 ']' 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:23.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:23.495 14:32:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.495 [2024-11-25 14:32:27.768452] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
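The ip/iptables sequence above split the two E810 ports between the host and a target network namespace, then proved reachability in both directions with ping. Condensed (same commands as the trace minus the xtrace decoration and address flushes, using the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addresses this run picked):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> host

This is also why the nvmf_tgt launch above is prefixed with "ip netns exec cvl_0_0_ns_spdk": the target only sees the namespaced port, so initiator and target traffic crosses the physical link rather than loopback.
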
00:34:23.495 [2024-11-25 14:32:27.768522] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:23.495 [2024-11-25 14:32:27.866258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:23.495 [2024-11-25 14:32:27.918893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:23.495 [2024-11-25 14:32:27.918943] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:23.495 [2024-11-25 14:32:27.918951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:23.495 [2024-11-25 14:32:27.918958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:23.495 [2024-11-25 14:32:27.918964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:23.495 [2024-11-25 14:32:27.920803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:23.495 [2024-11-25 14:32:27.920966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:23.495 [2024-11-25 14:32:27.920967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:23.757 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 [2024-11-25 14:32:28.648082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 Malloc0 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
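With the target's reactors up inside the namespace, tgt_init provisions it over the RPC socket. The rpc_cmd calls above (transport, malloc bdev, subsystem) plus the two that follow just below (namespace, listener) are equivalent to this scripts/rpc.py sequence; a sketch, since rpc_cmd is the suite's wrapper around rpc.py on /var/tmp/spdk.sock:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

That is: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks exported as a namespace of cnode1, and a listener on 10.0.0.2:4420 inside the target namespace.
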
00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.758 [2024-11-25 14:32:28.720607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:23.758 { 00:34:23.758 "params": { 00:34:23.758 "name": "Nvme$subsystem", 00:34:23.758 "trtype": "$TEST_TRANSPORT", 00:34:23.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.758 "adrfam": "ipv4", 00:34:23.758 "trsvcid": "$NVMF_PORT", 00:34:23.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.758 "hdgst": ${hdgst:-false}, 00:34:23.758 "ddgst": ${ddgst:-false} 00:34:23.758 }, 00:34:23.758 "method": "bdev_nvme_attach_controller" 00:34:23.758 } 00:34:23.758 EOF 00:34:23.758 )") 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:34:23.758 14:32:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:23.758 "params": { 00:34:23.758 "name": "Nvme1", 00:34:23.758 "trtype": "tcp", 00:34:23.758 "traddr": "10.0.0.2", 00:34:23.758 "adrfam": "ipv4", 00:34:23.758 "trsvcid": "4420", 00:34:23.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.758 "hdgst": false, 00:34:23.758 "ddgst": false 00:34:23.758 }, 00:34:23.758 "method": "bdev_nvme_attach_controller" 00:34:23.758 }' 00:34:23.758 [2024-11-25 14:32:28.780969] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
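The "--json /dev/fd/62" in the bdevperf launch above is bash process substitution: gen_nvmf_target_json prints the resolved config (the '{ "params": { "name": "Nvme1", ... "method": "bdev_nvme_attach_controller" }' block in the trace) and bdevperf reads it as its bdev configuration, attaching an NVMe-oF controller at 10.0.0.2:4420. A sketch of the pattern, with the flags copied from the launch line:

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w verify -t 1   # queue depth 128, 4 KiB IOs, verify workload, 1 second

The second run further below reuses the same config with -t 15 -f, and the test then kill -9's the target mid-run (host/bdevperf.sh@33), which is what triggers the ABORTED - SQ DELETION notices that follow.
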
00:34:23.758 [2024-11-25 14:32:28.781035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3616747 ] 00:34:24.065 [2024-11-25 14:32:28.875053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:24.065 [2024-11-25 14:32:28.928526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:24.382 Running I/O for 1 seconds... 00:34:25.365 8477.00 IOPS, 33.11 MiB/s 00:34:25.365 Latency(us) 00:34:25.365 [2024-11-25T13:32:30.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.365 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:25.365 Verification LBA range: start 0x0 length 0x4000 00:34:25.365 Nvme1n1 : 1.02 8553.58 33.41 0.00 0.00 14898.15 2771.63 12670.29 00:34:25.365 [2024-11-25T13:32:30.455Z] =================================================================================================================== 00:34:25.365 [2024-11-25T13:32:30.455Z] Total : 8553.58 33.41 0.00 0.00 14898.15 2771.63 12670.29 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3617086 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:25.365 { 00:34:25.365 "params": { 00:34:25.365 "name": "Nvme$subsystem", 00:34:25.365 "trtype": "$TEST_TRANSPORT", 00:34:25.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.365 "adrfam": "ipv4", 00:34:25.365 "trsvcid": "$NVMF_PORT", 00:34:25.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.365 "hdgst": ${hdgst:-false}, 00:34:25.365 "ddgst": ${ddgst:-false} 00:34:25.365 }, 00:34:25.365 "method": "bdev_nvme_attach_controller" 00:34:25.365 } 00:34:25.365 EOF 00:34:25.365 )") 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:34:25.365 14:32:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:25.365 "params": { 00:34:25.365 "name": "Nvme1", 00:34:25.365 "trtype": "tcp", 00:34:25.365 "traddr": "10.0.0.2", 00:34:25.365 "adrfam": "ipv4", 00:34:25.365 "trsvcid": "4420", 00:34:25.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:25.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:25.365 "hdgst": false, 00:34:25.366 "ddgst": false 00:34:25.366 }, 00:34:25.366 "method": "bdev_nvme_attach_controller" 00:34:25.366 }' 00:34:25.366 [2024-11-25 14:32:30.357385] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:34:25.366 [2024-11-25 14:32:30.357456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3617086 ] 00:34:25.366 [2024-11-25 14:32:30.451826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.627 [2024-11-25 14:32:30.504920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.889 Running I/O for 15 seconds... 00:34:27.786 9771.00 IOPS, 38.17 MiB/s [2024-11-25T13:32:33.452Z] 10442.50 IOPS, 40.79 MiB/s [2024-11-25T13:32:33.452Z] 14:32:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3616684 00:34:28.362 14:32:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:28.362 [2024-11-25 14:32:33.320146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.362 [2024-11-25 14:32:33.320192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.362 [2024-11-25 14:32:33.320212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.362 [2024-11-25 14:32:33.320222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.362 [2024-11-25 14:32:33.320233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.362 [2024-11-25 14:32:33.320241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.362 [2024-11-25 14:32:33.320251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.362 [2024-11-25 14:32:33.320259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.362 [2024-11-25 14:32:33.320275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.362 [2024-11-25 14:32:33.320284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.362 [2024-11-25 14:32:33.320294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.362 [2024-11-25 
14:32:33.320302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:28.363 [2024-11-25 14:32:33.320878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.320988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.320998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.321005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.321015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.363 [2024-11-25 14:32:33.321022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.363 [2024-11-25 14:32:33.321032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321227] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:28.364 [2024-11-25 14:32:33.321354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81904 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:28.364 [2024-11-25 14:32:33.321563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:28.364 [2024-11-25 14:32:33.321573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:28.364 [2024-11-25 14:32:33.321580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:28.364 [2024-11-25 14:32:33.321598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:28.364 [2024-11-25 14:32:33.321615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.364 [2024-11-25 14:32:33.321633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.364 [2024-11-25 14:32:33.321650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.364 [2024-11-25 14:32:33.321667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.364 [2024-11-25 14:32:33.321684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.364 [2024-11-25 14:32:33.321701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.364 [2024-11-25 14:32:33.321710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.364 [2024-11-25 14:32:33.321718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.321983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.321990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.365 [2024-11-25 14:32:33.322485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.365 [2024-11-25 14:32:33.322492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.366 [2024-11-25 14:32:33.322502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.366 [2024-11-25 14:32:33.322510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.366 [2024-11-25 14:32:33.322519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:28.366 [2024-11-25 14:32:33.322527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.366 [2024-11-25 14:32:33.322535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3a400 is same with the state(6) to be set
00:34:28.366 [2024-11-25 14:32:33.322545] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:28.366 [2024-11-25 14:32:33.322550] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:28.366 [2024-11-25 14:32:33.322557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82904 len:8 PRP1 0x0 PRP2 0x0
00:34:28.366 [2024-11-25 14:32:33.322565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:28.366 [2024-11-25 14:32:33.326086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:28.366 [2024-11-25 14:32:33.326139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:28.366 [2024-11-25 14:32:33.326948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.366 [2024-11-25 14:32:33.326966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:28.366 [2024-11-25 14:32:33.326975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:28.366 [2024-11-25 14:32:33.327200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:28.366 [2024-11-25 14:32:33.327421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:28.366 [2024-11-25 14:32:33.327430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:28.366 [2024-11-25 14:32:33.327439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:28.366 [2024-11-25 14:32:33.327448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:28.366 [2024-11-25 14:32:33.340208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:28.366 [2024-11-25 14:32:33.340822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.366 [2024-11-25 14:32:33.340862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:28.366 [2024-11-25 14:32:33.340873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:28.366 [2024-11-25 14:32:33.341114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:28.366 [2024-11-25 14:32:33.341349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:28.366 [2024-11-25 14:32:33.341360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:28.366 [2024-11-25 14:32:33.341369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:28.366 [2024-11-25 14:32:33.341377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.366 [2024-11-25 14:32:33.354125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.366 [2024-11-25 14:32:33.354764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.366 [2024-11-25 14:32:33.354805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.366 [2024-11-25 14:32:33.354816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.366 [2024-11-25 14:32:33.355057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.366 [2024-11-25 14:32:33.355289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.366 [2024-11-25 14:32:33.355301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.366 [2024-11-25 14:32:33.355309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.366 [2024-11-25 14:32:33.355317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.366 [2024-11-25 14:32:33.368078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.366 [2024-11-25 14:32:33.368750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.366 [2024-11-25 14:32:33.368792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.366 [2024-11-25 14:32:33.368804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.366 [2024-11-25 14:32:33.369049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.366 [2024-11-25 14:32:33.369284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.366 [2024-11-25 14:32:33.369295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.366 [2024-11-25 14:32:33.369303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.366 [2024-11-25 14:32:33.369311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.366 [2024-11-25 14:32:33.382072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.366 [2024-11-25 14:32:33.382722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.366 [2024-11-25 14:32:33.382765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.366 [2024-11-25 14:32:33.382777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.366 [2024-11-25 14:32:33.383018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.366 [2024-11-25 14:32:33.383251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.366 [2024-11-25 14:32:33.383261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.366 [2024-11-25 14:32:33.383270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.366 [2024-11-25 14:32:33.383278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.366 [2024-11-25 14:32:33.396034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.366 [2024-11-25 14:32:33.396670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.366 [2024-11-25 14:32:33.396714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.366 [2024-11-25 14:32:33.396726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.366 [2024-11-25 14:32:33.396967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.366 [2024-11-25 14:32:33.397202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.366 [2024-11-25 14:32:33.397214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.366 [2024-11-25 14:32:33.397222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.366 [2024-11-25 14:32:33.397231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.366 [2024-11-25 14:32:33.410002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.366 [2024-11-25 14:32:33.410658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.366 [2024-11-25 14:32:33.410704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.366 [2024-11-25 14:32:33.410715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.366 [2024-11-25 14:32:33.410971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.366 [2024-11-25 14:32:33.411208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.366 [2024-11-25 14:32:33.411224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.366 [2024-11-25 14:32:33.411232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.366 [2024-11-25 14:32:33.411241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.366 [2024-11-25 14:32:33.424003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.366 [2024-11-25 14:32:33.424613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.366 [2024-11-25 14:32:33.424637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.366 [2024-11-25 14:32:33.424646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.366 [2024-11-25 14:32:33.424867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.366 [2024-11-25 14:32:33.425088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.366 [2024-11-25 14:32:33.425098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.366 [2024-11-25 14:32:33.425105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.366 [2024-11-25 14:32:33.425112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.366 [2024-11-25 14:32:33.437879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.366 [2024-11-25 14:32:33.438425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.366 [2024-11-25 14:32:33.438447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.366 [2024-11-25 14:32:33.438455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.366 [2024-11-25 14:32:33.438674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.366 [2024-11-25 14:32:33.438895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.366 [2024-11-25 14:32:33.438907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.366 [2024-11-25 14:32:33.438915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.367 [2024-11-25 14:32:33.438922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.629 [2024-11-25 14:32:33.451694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.629 [2024-11-25 14:32:33.452173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.629 [2024-11-25 14:32:33.452196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.629 [2024-11-25 14:32:33.452205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.629 [2024-11-25 14:32:33.452426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.629 [2024-11-25 14:32:33.452647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.452659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.452667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.452680] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.630 [2024-11-25 14:32:33.465670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.466406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.466460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.466473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.466721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.466949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.466960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.466968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.466978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.630 [2024-11-25 14:32:33.479561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.480176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.480206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.480215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.480438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.480661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.480673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.480681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.480689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.630 [2024-11-25 14:32:33.493475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.494148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.494224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.494237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.494493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.494723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.494735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.494744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.494754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.630 [2024-11-25 14:32:33.507336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.508004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.508076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.508090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.508361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.508590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.508602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.508612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.508621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.630 [2024-11-25 14:32:33.521225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.521909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.521974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.521987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.522257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.522488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.522500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.522509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.522519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.630 [2024-11-25 14:32:33.535107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.535790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.535856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.535869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.536125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.536370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.536385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.536394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.536404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.630 [2024-11-25 14:32:33.548989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.549629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.549660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.549671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.549903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.550126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.550137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.550146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.550154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.630 [2024-11-25 14:32:33.562945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.563660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.563725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.563738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.563994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.564251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.564264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.564273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.564282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.630 [2024-11-25 14:32:33.576870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.577467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.577499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.577508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.630 [2024-11-25 14:32:33.577732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.630 [2024-11-25 14:32:33.577955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.630 [2024-11-25 14:32:33.577966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.630 [2024-11-25 14:32:33.577974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.630 [2024-11-25 14:32:33.577983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.630 [2024-11-25 14:32:33.590796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.630 [2024-11-25 14:32:33.591503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.630 [2024-11-25 14:32:33.591568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.630 [2024-11-25 14:32:33.591582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.591839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.592068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.592088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.592097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.592107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.631 [2024-11-25 14:32:33.604722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.605328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.605390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.605405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.605662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.605891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.605904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.605914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.605924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.631 [2024-11-25 14:32:33.618746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.619472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.619537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.619550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.619807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.620036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.620048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.620056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.620066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.631 [2024-11-25 14:32:33.632667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.633276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.633342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.633356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.633613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.633841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.633853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.633862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.633878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.631 [2024-11-25 14:32:33.646678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.647465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.647529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.647543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.647800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.648028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.648039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.648049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.648058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.631 [2024-11-25 14:32:33.660665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.661311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.661377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.661390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.661645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.661875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.661888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.661897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.661907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.631 [2024-11-25 14:32:33.674530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.675218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.675285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.675298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.675554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.675783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.675795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.675803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.675812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.631 [2024-11-25 14:32:33.688412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.689039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.689081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.689092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.689328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.689552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.689565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.689573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.689581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.631 [2024-11-25 14:32:33.702277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.631 [2024-11-25 14:32:33.702978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.631 [2024-11-25 14:32:33.703044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.631 [2024-11-25 14:32:33.703057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.631 [2024-11-25 14:32:33.703328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.631 [2024-11-25 14:32:33.703558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.631 [2024-11-25 14:32:33.703570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.631 [2024-11-25 14:32:33.703579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.631 [2024-11-25 14:32:33.703589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.631 [2024-11-25 14:32:33.716193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.716898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.716965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.716978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.717250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.717481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.717493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.717503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.717513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.895 [2024-11-25 14:32:33.730098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.730806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.730872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.730886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.731149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.731392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.731405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.731414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.731424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.895 [2024-11-25 14:32:33.744003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.744783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.744848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.744861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.745119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.745362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.745375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.745384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.745394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.895 [2024-11-25 14:32:33.757969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.758697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.758762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.758775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.759031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.759276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.759290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.759298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.759308] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.895 [2024-11-25 14:32:33.771917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.772553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.772584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.772594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.772817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.773041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.773053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.773069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.773077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.895 [2024-11-25 14:32:33.785860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.786544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.786609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.786622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.786878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.787107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.787119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.787129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.787138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.895 [2024-11-25 14:32:33.799746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.800506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.800572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.800585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.800841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.801070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.801082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.801091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.801101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.895 8873.33 IOPS, 34.66 MiB/s [2024-11-25T13:32:33.985Z] [2024-11-25 14:32:33.815378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.816061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.816127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.816139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.816410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.816641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.816653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.816661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.816671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
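The interleaved "8873.33 IOPS, 34.66 MiB/s" entry is the performance tool's periodic throughput sample (likely bdevperf's per-interval print, given the ISO-8601 timestamp style) rather than part of the error stream. The two figures are mutually consistent with a 4 KiB I/O size: 8873.33 IOPS x 4096 B ≈ 36,345,160 B/s, and 36,345,160 / 1,048,576 ≈ 34.66 MiB/s.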
00:34:28.895 [2024-11-25 14:32:33.829257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.895 [2024-11-25 14:32:33.829948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.895 [2024-11-25 14:32:33.830014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.895 [2024-11-25 14:32:33.830027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.895 [2024-11-25 14:32:33.830299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.895 [2024-11-25 14:32:33.830529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.895 [2024-11-25 14:32:33.830542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.895 [2024-11-25 14:32:33.830551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.895 [2024-11-25 14:32:33.830560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.895 [2024-11-25 14:32:33.843194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.843787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.843847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.843860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.844116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.844358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.844372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.844381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.844390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.896 [2024-11-25 14:32:33.857201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.857890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.857954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.857968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.858240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.858470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.858482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.858491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.858500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.896 [2024-11-25 14:32:33.871105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.871747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.871787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.871797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.872021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.872257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.872269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.872277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.872288] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.896 [2024-11-25 14:32:33.885068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.885683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.885710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.885719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.885942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.886175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.886187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.886196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.886204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.896 [2024-11-25 14:32:33.898988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.900266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.900333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.900348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.900609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.900842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.900856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.900870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.900883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.896 [2024-11-25 14:32:33.912871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.913634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.913699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.913711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.913975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.914221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.914234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.914243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.914252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.896 [2024-11-25 14:32:33.926822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.927521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.927585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.927598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.927854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.928084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.928095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.928104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.928115] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.896 [2024-11-25 14:32:33.940716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.941423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.941488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.941502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.941758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.941988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.942000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.942009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.942018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:28.896 [2024-11-25 14:32:33.954616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.955331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.955397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.955410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.955667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.955896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.955909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.955924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.955934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:28.896 [2024-11-25 14:32:33.968551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:28.896 [2024-11-25 14:32:33.969240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.896 [2024-11-25 14:32:33.969306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:28.896 [2024-11-25 14:32:33.969320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:28.896 [2024-11-25 14:32:33.969576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:28.896 [2024-11-25 14:32:33.969805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:28.896 [2024-11-25 14:32:33.969819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:28.896 [2024-11-25 14:32:33.969827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:28.896 [2024-11-25 14:32:33.969837] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.160 [2024-11-25 14:32:33.982450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.160 [2024-11-25 14:32:33.983091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.160 [2024-11-25 14:32:33.983121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.160 [2024-11-25 14:32:33.983131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:33.983366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:33.983591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:33.983603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:33.983613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:33.983622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.161 [2024-11-25 14:32:33.996415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:33.997021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:33.997047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:33.997056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:33.997289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:33.997513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:33.997525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:33.997535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:33.997544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.161 [2024-11-25 14:32:34.010368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.011071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.011138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.011151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.011422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.011652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.011664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.011673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.011683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.161 [2024-11-25 14:32:34.024276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.024880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.024910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.024920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.025143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.025383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.025395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.025403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.025412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.161 [2024-11-25 14:32:34.038208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.038890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.038955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.038968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.039242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.039472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.039484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.039493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.039503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.161 [2024-11-25 14:32:34.052076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.052820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.052893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.052906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.053180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.053410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.053423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.053432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.053441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.161 [2024-11-25 14:32:34.066032] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.066671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.066703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.066713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.066937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.067172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.067183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.067193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.067203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
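In each cycle the follow-up "Failed to flush tqpair=0xe27080 (9): Bad file descriptor" prints errno 9 (EBADF): the refused connect() has already closed the socket, so the flush attempted while the qpair is torn down runs on a dead file descriptor. The pointer 0xe27080 never changes across cycles, which suggests the same qpair object is being reused for every retry.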
00:34:29.161 [2024-11-25 14:32:34.079994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.080708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.080774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.080787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.081044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.081291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.081306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.081315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.081325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.161 [2024-11-25 14:32:34.093995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.094680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.094742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.094755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.095009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.095258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.095271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.095280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.095289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.161 [2024-11-25 14:32:34.107881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.108518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.161 [2024-11-25 14:32:34.108550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.161 [2024-11-25 14:32:34.108559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.161 [2024-11-25 14:32:34.108782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.161 [2024-11-25 14:32:34.109006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.161 [2024-11-25 14:32:34.109018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.161 [2024-11-25 14:32:34.109027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.161 [2024-11-25 14:32:34.109035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.161 [2024-11-25 14:32:34.121835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.161 [2024-11-25 14:32:34.122512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.122570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.122580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.122766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.122926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.122935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.122942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.122950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.162 [2024-11-25 14:32:34.134480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.135126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.135190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.135201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.135383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.135543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.135552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.135565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.135572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.162 [2024-11-25 14:32:34.147252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.147852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.147902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.147911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.148090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.148262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.148272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.148279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.148286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.162 [2024-11-25 14:32:34.159929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.160431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.160477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.160487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.160664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.160822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.160830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.160836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.160843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.162 [2024-11-25 14:32:34.172673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.173251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.173297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.173306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.173481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.173638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.173647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.173654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.173661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.162 [2024-11-25 14:32:34.185319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.185886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.185927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.185936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.186109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.186272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.186281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.186289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.186296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.162 [2024-11-25 14:32:34.198070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.198639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.198678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.198687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.198859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.199014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.199022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.199030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.199037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.162 [2024-11-25 14:32:34.210818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.211405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.211442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.211452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.211625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.211780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.211787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.211793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.211800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.162 [2024-11-25 14:32:34.223443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.224055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.224090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.224104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.224282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.224438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.224445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.224452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.224458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.162 [2024-11-25 14:32:34.236078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.162 [2024-11-25 14:32:34.236677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.162 [2024-11-25 14:32:34.236710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.162 [2024-11-25 14:32:34.236719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.162 [2024-11-25 14:32:34.236887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.162 [2024-11-25 14:32:34.237041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.162 [2024-11-25 14:32:34.237048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.162 [2024-11-25 14:32:34.237054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.162 [2024-11-25 14:32:34.237060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.426 [2024-11-25 14:32:34.248701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.426 [2024-11-25 14:32:34.249340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.426 [2024-11-25 14:32:34.249371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.426 [2024-11-25 14:32:34.249380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.426 [2024-11-25 14:32:34.249548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.426 [2024-11-25 14:32:34.249702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.426 [2024-11-25 14:32:34.249709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.426 [2024-11-25 14:32:34.249716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.426 [2024-11-25 14:32:34.249722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.426 [2024-11-25 14:32:34.261368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.426 [2024-11-25 14:32:34.261936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.426 [2024-11-25 14:32:34.261967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.426 [2024-11-25 14:32:34.261976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.426 [2024-11-25 14:32:34.262144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.426 [2024-11-25 14:32:34.262310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.426 [2024-11-25 14:32:34.262317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.426 [2024-11-25 14:32:34.262323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.426 [2024-11-25 14:32:34.262329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.426 [2024-11-25 14:32:34.274104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.426 [2024-11-25 14:32:34.274573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.426 [2024-11-25 14:32:34.274603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.426 [2024-11-25 14:32:34.274612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.426 [2024-11-25 14:32:34.274779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.426 [2024-11-25 14:32:34.274933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.426 [2024-11-25 14:32:34.274940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.426 [2024-11-25 14:32:34.274947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.426 [2024-11-25 14:32:34.274953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.426 [2024-11-25 14:32:34.286748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.426 [2024-11-25 14:32:34.287424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.426 [2024-11-25 14:32:34.287455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.426 [2024-11-25 14:32:34.287463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.426 [2024-11-25 14:32:34.287629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.426 [2024-11-25 14:32:34.287783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.426 [2024-11-25 14:32:34.287789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.426 [2024-11-25 14:32:34.287795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.426 [2024-11-25 14:32:34.287801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.426 [2024-11-25 14:32:34.299436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.426 [2024-11-25 14:32:34.300024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.426 [2024-11-25 14:32:34.300053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.300062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.300239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.300393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.300400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.300409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.300415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.312193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.312749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.312779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.312788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.312954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.313115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.313123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.313128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.313135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.324917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.325470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.325500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.325509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.325675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.325829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.325837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.325842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.325848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.337631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.338219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.338251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.338260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.338426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.338580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.338586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.338592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.338597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.350371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.350944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.350974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.350983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.351149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.351309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.351317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.351323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.351329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.363025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.363497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.363513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.363518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.363670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.363820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.363826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.363831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.363836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.375741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.376225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.376238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.376243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.376394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.376545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.376550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.376555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.376561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.388454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.389013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.389044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.389058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.389233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.389389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.389397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.389405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.389412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.401175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.401710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.401740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.401749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.401915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.402069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.402075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.402081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.402087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.413857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.414358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.414373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.427 [2024-11-25 14:32:34.414379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.427 [2024-11-25 14:32:34.414531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.427 [2024-11-25 14:32:34.414681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.427 [2024-11-25 14:32:34.414687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.427 [2024-11-25 14:32:34.414692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.427 [2024-11-25 14:32:34.414697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.427 [2024-11-25 14:32:34.426595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.427 [2024-11-25 14:32:34.427085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.427 [2024-11-25 14:32:34.427098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.428 [2024-11-25 14:32:34.427103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.428 [2024-11-25 14:32:34.427258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.428 [2024-11-25 14:32:34.427413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.428 [2024-11-25 14:32:34.427418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.428 [2024-11-25 14:32:34.427423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.428 [2024-11-25 14:32:34.427428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.428 [2024-11-25 14:32:34.439320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.428 [2024-11-25 14:32:34.439840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.428 [2024-11-25 14:32:34.439871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.428 [2024-11-25 14:32:34.439879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.428 [2024-11-25 14:32:34.440046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.428 [2024-11-25 14:32:34.440205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.428 [2024-11-25 14:32:34.440212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.428 [2024-11-25 14:32:34.440218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.428 [2024-11-25 14:32:34.440223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.428 [2024-11-25 14:32:34.451977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.428 [2024-11-25 14:32:34.452558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.428 [2024-11-25 14:32:34.452589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.428 [2024-11-25 14:32:34.452597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.428 [2024-11-25 14:32:34.452764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.428 [2024-11-25 14:32:34.452918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.428 [2024-11-25 14:32:34.452924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.428 [2024-11-25 14:32:34.452929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.428 [2024-11-25 14:32:34.452935] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.428 [2024-11-25 14:32:34.464696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.428 [2024-11-25 14:32:34.465204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.428 [2024-11-25 14:32:34.465226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.428 [2024-11-25 14:32:34.465233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.428 [2024-11-25 14:32:34.465390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.428 [2024-11-25 14:32:34.465542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.428 [2024-11-25 14:32:34.465548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.428 [2024-11-25 14:32:34.465557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.428 [2024-11-25 14:32:34.465562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.428 [2024-11-25 14:32:34.477332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.428 [2024-11-25 14:32:34.477938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.428 [2024-11-25 14:32:34.477968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.428 [2024-11-25 14:32:34.477977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.428 [2024-11-25 14:32:34.478144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.428 [2024-11-25 14:32:34.478303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.428 [2024-11-25 14:32:34.478310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.428 [2024-11-25 14:32:34.478316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.428 [2024-11-25 14:32:34.478322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.428 [2024-11-25 14:32:34.490077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.428 [2024-11-25 14:32:34.490646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.428 [2024-11-25 14:32:34.490676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.428 [2024-11-25 14:32:34.490685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.428 [2024-11-25 14:32:34.490851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.428 [2024-11-25 14:32:34.491005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.428 [2024-11-25 14:32:34.491011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.428 [2024-11-25 14:32:34.491017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.428 [2024-11-25 14:32:34.491022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.428 [2024-11-25 14:32:34.502783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.428 [2024-11-25 14:32:34.503255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.428 [2024-11-25 14:32:34.503270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.428 [2024-11-25 14:32:34.503276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.428 [2024-11-25 14:32:34.503427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.428 [2024-11-25 14:32:34.503577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.428 [2024-11-25 14:32:34.503583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.428 [2024-11-25 14:32:34.503588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.428 [2024-11-25 14:32:34.503593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.692 [2024-11-25 14:32:34.515496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.692 [2024-11-25 14:32:34.515993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-11-25 14:32:34.516006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.692 [2024-11-25 14:32:34.516012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.692 [2024-11-25 14:32:34.516167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.692 [2024-11-25 14:32:34.516319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.692 [2024-11-25 14:32:34.516324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.692 [2024-11-25 14:32:34.516329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.692 [2024-11-25 14:32:34.516334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.692 [2024-11-25 14:32:34.528229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.692 [2024-11-25 14:32:34.528705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-11-25 14:32:34.528718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.692 [2024-11-25 14:32:34.528723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.692 [2024-11-25 14:32:34.528873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.692 [2024-11-25 14:32:34.529023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.692 [2024-11-25 14:32:34.529029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.692 [2024-11-25 14:32:34.529034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.692 [2024-11-25 14:32:34.529039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.692 [2024-11-25 14:32:34.540932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.692 [2024-11-25 14:32:34.541410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-11-25 14:32:34.541422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.692 [2024-11-25 14:32:34.541427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.692 [2024-11-25 14:32:34.541578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.692 [2024-11-25 14:32:34.541728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.692 [2024-11-25 14:32:34.541734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.692 [2024-11-25 14:32:34.541739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.692 [2024-11-25 14:32:34.541743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.692 [2024-11-25 14:32:34.553640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.692 [2024-11-25 14:32:34.554206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-11-25 14:32:34.554237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.692 [2024-11-25 14:32:34.554249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.692 [2024-11-25 14:32:34.554418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.692 [2024-11-25 14:32:34.554572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.692 [2024-11-25 14:32:34.554578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.692 [2024-11-25 14:32:34.554583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.692 [2024-11-25 14:32:34.554590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.692 [2024-11-25 14:32:34.566366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.692 [2024-11-25 14:32:34.566941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-11-25 14:32:34.566972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.566980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.567147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.567307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.567315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.567320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.567326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.579085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.579629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.579659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.579668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.579836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.579990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.579997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.580003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.580009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.591773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.592388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.592419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.592428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.592594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.592752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.592759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.592765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.592770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.604393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.604955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.604985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.604993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.605168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.605322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.605329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.605334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.605341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.617098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.617449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.617465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.617471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.617622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.617773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.617779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.617784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.617789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.629829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.630383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.630414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.630422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.630589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.630743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.630749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.630754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.630763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.642537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.643105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.643135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.643144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.643321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.643475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.643482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.643487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.643493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.655290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.655835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.655865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.655873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.656040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.656201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.656208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.656214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.656219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.667997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.668510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.668526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.668531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.668682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.668833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.668839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.668844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.668848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.680614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.681102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.681115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.693 [2024-11-25 14:32:34.681120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.693 [2024-11-25 14:32:34.681276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.693 [2024-11-25 14:32:34.681427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.693 [2024-11-25 14:32:34.681433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.693 [2024-11-25 14:32:34.681437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.693 [2024-11-25 14:32:34.681442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.693 [2024-11-25 14:32:34.693349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.693 [2024-11-25 14:32:34.693829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.693 [2024-11-25 14:32:34.693841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.694 [2024-11-25 14:32:34.693847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.694 [2024-11-25 14:32:34.693997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.694 [2024-11-25 14:32:34.694147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.694 [2024-11-25 14:32:34.694153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.694 [2024-11-25 14:32:34.694161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.694 [2024-11-25 14:32:34.694166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.694 [2024-11-25 14:32:34.706073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.694 [2024-11-25 14:32:34.706552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.694 [2024-11-25 14:32:34.706565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.694 [2024-11-25 14:32:34.706570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.694 [2024-11-25 14:32:34.706720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.694 [2024-11-25 14:32:34.706871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.694 [2024-11-25 14:32:34.706877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.694 [2024-11-25 14:32:34.706881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.694 [2024-11-25 14:32:34.706886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.694 [2024-11-25 14:32:34.718800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.694 [2024-11-25 14:32:34.719357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.694 [2024-11-25 14:32:34.719387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.694 [2024-11-25 14:32:34.719395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.694 [2024-11-25 14:32:34.719566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.694 [2024-11-25 14:32:34.719719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.694 [2024-11-25 14:32:34.719725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.694 [2024-11-25 14:32:34.719731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.694 [2024-11-25 14:32:34.719737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.694 [2024-11-25 14:32:34.731517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.694 [2024-11-25 14:32:34.732072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.694 [2024-11-25 14:32:34.732102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.694 [2024-11-25 14:32:34.732111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.694 [2024-11-25 14:32:34.732287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.694 [2024-11-25 14:32:34.732442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.694 [2024-11-25 14:32:34.732448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.694 [2024-11-25 14:32:34.732454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.694 [2024-11-25 14:32:34.732460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.694 [2024-11-25 14:32:34.744234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.694 [2024-11-25 14:32:34.744715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.694 [2024-11-25 14:32:34.744730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.694 [2024-11-25 14:32:34.744736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.694 [2024-11-25 14:32:34.744886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.694 [2024-11-25 14:32:34.745037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.694 [2024-11-25 14:32:34.745044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.694 [2024-11-25 14:32:34.745049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.694 [2024-11-25 14:32:34.745054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.694 [2024-11-25 14:32:34.756964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.694 [2024-11-25 14:32:34.757513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.694 [2024-11-25 14:32:34.757543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.694 [2024-11-25 14:32:34.757552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.694 [2024-11-25 14:32:34.757718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.694 [2024-11-25 14:32:34.757872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.694 [2024-11-25 14:32:34.757882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.694 [2024-11-25 14:32:34.757887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.694 [2024-11-25 14:32:34.757893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.694 [2024-11-25 14:32:34.769666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.694 [2024-11-25 14:32:34.770254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.694 [2024-11-25 14:32:34.770285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.694 [2024-11-25 14:32:34.770294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.694 [2024-11-25 14:32:34.770461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.694 [2024-11-25 14:32:34.770615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.694 [2024-11-25 14:32:34.770622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.694 [2024-11-25 14:32:34.770627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.694 [2024-11-25 14:32:34.770633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.957 [2024-11-25 14:32:34.782301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.957 [2024-11-25 14:32:34.782877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.957 [2024-11-25 14:32:34.782907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.957 [2024-11-25 14:32:34.782916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.957 [2024-11-25 14:32:34.783082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.957 [2024-11-25 14:32:34.783242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.957 [2024-11-25 14:32:34.783249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.957 [2024-11-25 14:32:34.783255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.957 [2024-11-25 14:32:34.783261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.957 [2024-11-25 14:32:34.795031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.957 [2024-11-25 14:32:34.795385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.957 [2024-11-25 14:32:34.795400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.957 [2024-11-25 14:32:34.795406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.957 [2024-11-25 14:32:34.795557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.957 [2024-11-25 14:32:34.795707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.957 [2024-11-25 14:32:34.795713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.957 [2024-11-25 14:32:34.795718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.957 [2024-11-25 14:32:34.795727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.957 [2024-11-25 14:32:34.807780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.957 [2024-11-25 14:32:34.808254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.957 [2024-11-25 14:32:34.808267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.957 [2024-11-25 14:32:34.808273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.957 [2024-11-25 14:32:34.808423] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.957 [2024-11-25 14:32:34.808574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.957 [2024-11-25 14:32:34.808579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.957 [2024-11-25 14:32:34.808584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.957 [2024-11-25 14:32:34.808589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.957 6655.00 IOPS, 26.00 MiB/s [2024-11-25T13:32:35.047Z] [2024-11-25 14:32:34.820491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.957 [2024-11-25 14:32:34.820989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.957 [2024-11-25 14:32:34.821002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.957 [2024-11-25 14:32:34.821007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.958 [2024-11-25 14:32:34.821162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.958 [2024-11-25 14:32:34.821313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.958 [2024-11-25 14:32:34.821319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.958 [2024-11-25 14:32:34.821324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.958 [2024-11-25 14:32:34.821328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.958 [2024-11-25 14:32:34.833234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.958 [2024-11-25 14:32:34.833698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.958 [2024-11-25 14:32:34.833710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.958 [2024-11-25 14:32:34.833715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.958 [2024-11-25 14:32:34.833865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.958 [2024-11-25 14:32:34.834016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.958 [2024-11-25 14:32:34.834022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.958 [2024-11-25 14:32:34.834027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.958 [2024-11-25 14:32:34.834031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.958 [2024-11-25 14:32:34.845940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.958 [2024-11-25 14:32:34.846512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.958 [2024-11-25 14:32:34.846542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.958 [2024-11-25 14:32:34.846551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.958 [2024-11-25 14:32:34.846719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.958 [2024-11-25 14:32:34.846873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.958 [2024-11-25 14:32:34.846879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.958 [2024-11-25 14:32:34.846885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.958 [2024-11-25 14:32:34.846891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:29.958 [2024-11-25 14:32:34.858654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:29.958 [2024-11-25 14:32:34.859161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.958 [2024-11-25 14:32:34.859175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:29.958 [2024-11-25 14:32:34.859182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:29.958 [2024-11-25 14:32:34.859333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:29.958 [2024-11-25 14:32:34.859484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:29.958 [2024-11-25 14:32:34.859490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:29.958 [2024-11-25 14:32:34.859495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:29.958 [2024-11-25 14:32:34.859500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:29.958 [2024-11-25 14:32:34.871263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.958 [2024-11-25 14:32:34.871823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.958 [2024-11-25 14:32:34.871854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.958 [2024-11-25 14:32:34.871862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.958 [2024-11-25 14:32:34.872029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.958 [2024-11-25 14:32:34.872191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.958 [2024-11-25 14:32:34.872199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.958 [2024-11-25 14:32:34.872204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.958 [2024-11-25 14:32:34.872210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.958 [2024-11-25 14:32:34.883973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.958 [2024-11-25 14:32:34.884547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.958 [2024-11-25 14:32:34.884578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.958 [2024-11-25 14:32:34.884586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.958 [2024-11-25 14:32:34.884757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.958 [2024-11-25 14:32:34.884912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.958 [2024-11-25 14:32:34.884918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.958 [2024-11-25 14:32:34.884923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.958 [2024-11-25 14:32:34.884929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.958 [2024-11-25 14:32:34.896686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.958 [2024-11-25 14:32:34.897168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.958 [2024-11-25 14:32:34.897184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.958 [2024-11-25 14:32:34.897189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.958 [2024-11-25 14:32:34.897340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.958 [2024-11-25 14:32:34.897491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.958 [2024-11-25 14:32:34.897497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.958 [2024-11-25 14:32:34.897502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.958 [2024-11-25 14:32:34.897506] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.958 [2024-11-25 14:32:34.909399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.958 [2024-11-25 14:32:34.909858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.958 [2024-11-25 14:32:34.909870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.958 [2024-11-25 14:32:34.909875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.958 [2024-11-25 14:32:34.910025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.958 [2024-11-25 14:32:34.910181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.958 [2024-11-25 14:32:34.910188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.958 [2024-11-25 14:32:34.910193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.958 [2024-11-25 14:32:34.910198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.958 [2024-11-25 14:32:34.922093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.958 [2024-11-25 14:32:34.922709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.958 [2024-11-25 14:32:34.922740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.958 [2024-11-25 14:32:34.922748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.958 [2024-11-25 14:32:34.922915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.958 [2024-11-25 14:32:34.923069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.958 [2024-11-25 14:32:34.923078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.958 [2024-11-25 14:32:34.923084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.958 [2024-11-25 14:32:34.923090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.958 [2024-11-25 14:32:34.934709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.958 [2024-11-25 14:32:34.935278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.958 [2024-11-25 14:32:34.935308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.958 [2024-11-25 14:32:34.935317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.958 [2024-11-25 14:32:34.935486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.958 [2024-11-25 14:32:34.935639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.958 [2024-11-25 14:32:34.935646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.958 [2024-11-25 14:32:34.935651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.958 [2024-11-25 14:32:34.935657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.958 [2024-11-25 14:32:34.947425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.958 [2024-11-25 14:32:34.947996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.958 [2024-11-25 14:32:34.948027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.958 [2024-11-25 14:32:34.948035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:34.948207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:34.948361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:34.948368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:34.948373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:34.948379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.959 [2024-11-25 14:32:34.960129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.959 [2024-11-25 14:32:34.960704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.959 [2024-11-25 14:32:34.960734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.959 [2024-11-25 14:32:34.960743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:34.960909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:34.961063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:34.961069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:34.961075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:34.961087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.959 [2024-11-25 14:32:34.972853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.959 [2024-11-25 14:32:34.973434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.959 [2024-11-25 14:32:34.973465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.959 [2024-11-25 14:32:34.973473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:34.973639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:34.973793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:34.973800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:34.973805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:34.973811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.959 [2024-11-25 14:32:34.985573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.959 [2024-11-25 14:32:34.986188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.959 [2024-11-25 14:32:34.986218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.959 [2024-11-25 14:32:34.986226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:34.986395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:34.986549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:34.986555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:34.986561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:34.986566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.959 [2024-11-25 14:32:34.998182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.959 [2024-11-25 14:32:34.998751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.959 [2024-11-25 14:32:34.998781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.959 [2024-11-25 14:32:34.998789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:34.998956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:34.999109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:34.999115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:34.999121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:34.999126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.959 [2024-11-25 14:32:35.010887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.959 [2024-11-25 14:32:35.011466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.959 [2024-11-25 14:32:35.011496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.959 [2024-11-25 14:32:35.011504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:35.011670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:35.011824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:35.011831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:35.011837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:35.011842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.959 [2024-11-25 14:32:35.023610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.959 [2024-11-25 14:32:35.024223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.959 [2024-11-25 14:32:35.024254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.959 [2024-11-25 14:32:35.024262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:35.024429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:35.024583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:35.024590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:35.024595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:35.024601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:29.959 [2024-11-25 14:32:35.036356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:29.959 [2024-11-25 14:32:35.036907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.959 [2024-11-25 14:32:35.036937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:29.959 [2024-11-25 14:32:35.036946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:29.959 [2024-11-25 14:32:35.037112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:29.959 [2024-11-25 14:32:35.037274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:29.959 [2024-11-25 14:32:35.037282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:29.959 [2024-11-25 14:32:35.037287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:29.959 [2024-11-25 14:32:35.037293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.222 [2024-11-25 14:32:35.049049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.222 [2024-11-25 14:32:35.049649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.222 [2024-11-25 14:32:35.049680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.222 [2024-11-25 14:32:35.049689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.222 [2024-11-25 14:32:35.049859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.222 [2024-11-25 14:32:35.050013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.222 [2024-11-25 14:32:35.050019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.222 [2024-11-25 14:32:35.050024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.222 [2024-11-25 14:32:35.050030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.222 [2024-11-25 14:32:35.061786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.222 [2024-11-25 14:32:35.062284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.222 [2024-11-25 14:32:35.062314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.222 [2024-11-25 14:32:35.062323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.222 [2024-11-25 14:32:35.062491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.222 [2024-11-25 14:32:35.062645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.222 [2024-11-25 14:32:35.062651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.222 [2024-11-25 14:32:35.062656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.222 [2024-11-25 14:32:35.062662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.222 [2024-11-25 14:32:35.074429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.222 [2024-11-25 14:32:35.075044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.222 [2024-11-25 14:32:35.075074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.222 [2024-11-25 14:32:35.075082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.222 [2024-11-25 14:32:35.075256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.222 [2024-11-25 14:32:35.075410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.222 [2024-11-25 14:32:35.075416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.222 [2024-11-25 14:32:35.075422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.222 [2024-11-25 14:32:35.075428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.222 [2024-11-25 14:32:35.087196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.222 [2024-11-25 14:32:35.087778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.222 [2024-11-25 14:32:35.087807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.222 [2024-11-25 14:32:35.087816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.222 [2024-11-25 14:32:35.087982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.222 [2024-11-25 14:32:35.088136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.222 [2024-11-25 14:32:35.088146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.222 [2024-11-25 14:32:35.088152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.222 [2024-11-25 14:32:35.088165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.222 [2024-11-25 14:32:35.099917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.222 [2024-11-25 14:32:35.100575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.222 [2024-11-25 14:32:35.100605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.222 [2024-11-25 14:32:35.100614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.222 [2024-11-25 14:32:35.100780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.222 [2024-11-25 14:32:35.100934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.100940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.100946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.100951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.112575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.113176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.113206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.113214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.113380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.113534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.113541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.113546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.113551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.125319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.125890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.125920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.125929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.126096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.126255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.126262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.126268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.126277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.138025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.138607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.138637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.138645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.138812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.138965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.138972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.138977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.138983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.150741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.151211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.151242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.151251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.151419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.151573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.151579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.151584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.151590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.163352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.163939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.163969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.163977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.164143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.164306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.164313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.164318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.164324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.176084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.176668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.176702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.176711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.176877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.177031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.177037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.177042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.177048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.188807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.189389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.189419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.189427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.189593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.189747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.189753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.189759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.189764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.201521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.202091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.202121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.202130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.202304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.202458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.202464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.202470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.202475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.214231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.214801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.214831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.214839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.215009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.223 [2024-11-25 14:32:35.215170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.223 [2024-11-25 14:32:35.215177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.223 [2024-11-25 14:32:35.215183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.223 [2024-11-25 14:32:35.215189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.223 [2024-11-25 14:32:35.226944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.223 [2024-11-25 14:32:35.227523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.223 [2024-11-25 14:32:35.227553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.223 [2024-11-25 14:32:35.227562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.223 [2024-11-25 14:32:35.227731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.224 [2024-11-25 14:32:35.227884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.224 [2024-11-25 14:32:35.227890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.224 [2024-11-25 14:32:35.227896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.224 [2024-11-25 14:32:35.227901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.224 [2024-11-25 14:32:35.239659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.224 [2024-11-25 14:32:35.240198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.224 [2024-11-25 14:32:35.240228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.224 [2024-11-25 14:32:35.240237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.224 [2024-11-25 14:32:35.240406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.224 [2024-11-25 14:32:35.240559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.224 [2024-11-25 14:32:35.240566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.224 [2024-11-25 14:32:35.240572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.224 [2024-11-25 14:32:35.240577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.224 [2024-11-25 14:32:35.252336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.224 [2024-11-25 14:32:35.252786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.224 [2024-11-25 14:32:35.252816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.224 [2024-11-25 14:32:35.252825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.224 [2024-11-25 14:32:35.252991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.224 [2024-11-25 14:32:35.253145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.224 [2024-11-25 14:32:35.253154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.224 [2024-11-25 14:32:35.253168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.224 [2024-11-25 14:32:35.253175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.224 [2024-11-25 14:32:35.265074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.224 [2024-11-25 14:32:35.265674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.224 [2024-11-25 14:32:35.265705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.224 [2024-11-25 14:32:35.265713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.224 [2024-11-25 14:32:35.265882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.224 [2024-11-25 14:32:35.266036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.224 [2024-11-25 14:32:35.266042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.224 [2024-11-25 14:32:35.266048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.224 [2024-11-25 14:32:35.266054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.224 [2024-11-25 14:32:35.277825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.224 [2024-11-25 14:32:35.278367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.224 [2024-11-25 14:32:35.278397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.224 [2024-11-25 14:32:35.278406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.224 [2024-11-25 14:32:35.278575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.224 [2024-11-25 14:32:35.278729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.224 [2024-11-25 14:32:35.278735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.224 [2024-11-25 14:32:35.278741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.224 [2024-11-25 14:32:35.278748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.224 [2024-11-25 14:32:35.290507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.224 [2024-11-25 14:32:35.291085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.224 [2024-11-25 14:32:35.291115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.224 [2024-11-25 14:32:35.291124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.224 [2024-11-25 14:32:35.291298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.224 [2024-11-25 14:32:35.291452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.224 [2024-11-25 14:32:35.291458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.224 [2024-11-25 14:32:35.291464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.224 [2024-11-25 14:32:35.291470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.224 [2024-11-25 14:32:35.303224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.224 [2024-11-25 14:32:35.303793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.224 [2024-11-25 14:32:35.303824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.224 [2024-11-25 14:32:35.303832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.224 [2024-11-25 14:32:35.303999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.224 [2024-11-25 14:32:35.304153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.224 [2024-11-25 14:32:35.304166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.224 [2024-11-25 14:32:35.304171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.224 [2024-11-25 14:32:35.304177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.488 [2024-11-25 14:32:35.315935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.488 [2024-11-25 14:32:35.316500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.488 [2024-11-25 14:32:35.316530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.488 [2024-11-25 14:32:35.316539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.488 [2024-11-25 14:32:35.316713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.488 [2024-11-25 14:32:35.316868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.488 [2024-11-25 14:32:35.316874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.488 [2024-11-25 14:32:35.316879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.488 [2024-11-25 14:32:35.316885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.488 [2024-11-25 14:32:35.328642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.488 [2024-11-25 14:32:35.329098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.488 [2024-11-25 14:32:35.329113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.488 [2024-11-25 14:32:35.329119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.488 [2024-11-25 14:32:35.329273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.488 [2024-11-25 14:32:35.329424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.488 [2024-11-25 14:32:35.329430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.488 [2024-11-25 14:32:35.329435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.488 [2024-11-25 14:32:35.329439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.488 [2024-11-25 14:32:35.341334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.488 [2024-11-25 14:32:35.341951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.488 [2024-11-25 14:32:35.341985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.488 [2024-11-25 14:32:35.341994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.488 [2024-11-25 14:32:35.342166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.489 [2024-11-25 14:32:35.342320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.489 [2024-11-25 14:32:35.342327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.489 [2024-11-25 14:32:35.342332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.489 [2024-11-25 14:32:35.342338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.489 [2024-11-25 14:32:35.353947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.489 [2024-11-25 14:32:35.354566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.489 [2024-11-25 14:32:35.354596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.489 [2024-11-25 14:32:35.354605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.489 [2024-11-25 14:32:35.354774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.489 [2024-11-25 14:32:35.354928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.489 [2024-11-25 14:32:35.354934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.489 [2024-11-25 14:32:35.354940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.489 [2024-11-25 14:32:35.354946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.489 [2024-11-25 14:32:35.366569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.489 [2024-11-25 14:32:35.367121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.489 [2024-11-25 14:32:35.367151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.489 [2024-11-25 14:32:35.367172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.489 [2024-11-25 14:32:35.367340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.489 [2024-11-25 14:32:35.367495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.489 [2024-11-25 14:32:35.367503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.489 [2024-11-25 14:32:35.367510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.489 [2024-11-25 14:32:35.367517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.489 [2024-11-25 14:32:35.379271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.489 [2024-11-25 14:32:35.379741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.489 [2024-11-25 14:32:35.379771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.489 [2024-11-25 14:32:35.379779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.489 [2024-11-25 14:32:35.379949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.489 [2024-11-25 14:32:35.380103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.489 [2024-11-25 14:32:35.380109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.489 [2024-11-25 14:32:35.380114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.489 [2024-11-25 14:32:35.380120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.489 [2024-11-25 14:32:35.391943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:30.489 [2024-11-25 14:32:35.392503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.489 [2024-11-25 14:32:35.392532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:30.489 [2024-11-25 14:32:35.392541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:30.489 [2024-11-25 14:32:35.392707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:30.489 [2024-11-25 14:32:35.392860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:30.489 [2024-11-25 14:32:35.392866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:30.489 [2024-11-25 14:32:35.392872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:30.489 [2024-11-25 14:32:35.392878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:30.489 [2024-11-25 14:32:35.404634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.489 [2024-11-25 14:32:35.405203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.489 [2024-11-25 14:32:35.405234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.489 [2024-11-25 14:32:35.405242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.489 [2024-11-25 14:32:35.405409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.489 [2024-11-25 14:32:35.405562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.489 [2024-11-25 14:32:35.405568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.489 [2024-11-25 14:32:35.405574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.489 [2024-11-25 14:32:35.405579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.489 [2024-11-25 14:32:35.417342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.489 [2024-11-25 14:32:35.417910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.489 [2024-11-25 14:32:35.417940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.489 [2024-11-25 14:32:35.417949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.489 [2024-11-25 14:32:35.418115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.489 [2024-11-25 14:32:35.418277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.489 [2024-11-25 14:32:35.418284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.489 [2024-11-25 14:32:35.418293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.489 [2024-11-25 14:32:35.418299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.489 [2024-11-25 14:32:35.430045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.489 [2024-11-25 14:32:35.430593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.489 [2024-11-25 14:32:35.430623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.489 [2024-11-25 14:32:35.430632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.489 [2024-11-25 14:32:35.430798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.489 [2024-11-25 14:32:35.430952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.489 [2024-11-25 14:32:35.430959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.489 [2024-11-25 14:32:35.430964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.489 [2024-11-25 14:32:35.430969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.489 [2024-11-25 14:32:35.442725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.489 [2024-11-25 14:32:35.443258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.489 [2024-11-25 14:32:35.443287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.489 [2024-11-25 14:32:35.443296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.489 [2024-11-25 14:32:35.443465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.489 [2024-11-25 14:32:35.443618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.489 [2024-11-25 14:32:35.443625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.489 [2024-11-25 14:32:35.443631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.489 [2024-11-25 14:32:35.443636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.489 [2024-11-25 14:32:35.455391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.489 [2024-11-25 14:32:35.455984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.489 [2024-11-25 14:32:35.456014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.456022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.456195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.456349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.456356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.456362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.456367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.490 [2024-11-25 14:32:35.468126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.468700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.468730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.468739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.468905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.469058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.469065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.469070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.469076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.490 [2024-11-25 14:32:35.480834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.481312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.481327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.481333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.481484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.481634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.481640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.481644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.481649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.490 [2024-11-25 14:32:35.493539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.494025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.494037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.494042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.494196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.494347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.494353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.494358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.494362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.490 [2024-11-25 14:32:35.506253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.506735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.506747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.506756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.506906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.507057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.507062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.507067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.507072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.490 [2024-11-25 14:32:35.518975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.519486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.519516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.519525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.519693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.519846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.519853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.519858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.519864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.490 [2024-11-25 14:32:35.531624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.532237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.532267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.532276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.532445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.532598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.532604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.532610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.532615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.490 [2024-11-25 14:32:35.544236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.544809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.544838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.544847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.545013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.545179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.545187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.545192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.545198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.490 [2024-11-25 14:32:35.556946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.557489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.557519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.557528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.557694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.557847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.557853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.557859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.557864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.490 [2024-11-25 14:32:35.569631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.490 [2024-11-25 14:32:35.570081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.490 [2024-11-25 14:32:35.570110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.490 [2024-11-25 14:32:35.570119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.490 [2024-11-25 14:32:35.570293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.490 [2024-11-25 14:32:35.570448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.490 [2024-11-25 14:32:35.570454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.490 [2024-11-25 14:32:35.570460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.490 [2024-11-25 14:32:35.570466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.753 [2024-11-25 14:32:35.582363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.753 [2024-11-25 14:32:35.582991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.753 [2024-11-25 14:32:35.583021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.753 [2024-11-25 14:32:35.583030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.753 [2024-11-25 14:32:35.583204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.753 [2024-11-25 14:32:35.583358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.753 [2024-11-25 14:32:35.583365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.753 [2024-11-25 14:32:35.583377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.753 [2024-11-25 14:32:35.583383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.753 [2024-11-25 14:32:35.594995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.753 [2024-11-25 14:32:35.595558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.753 [2024-11-25 14:32:35.595588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.753 [2024-11-25 14:32:35.595596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.753 [2024-11-25 14:32:35.595762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.753 [2024-11-25 14:32:35.595916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.753 [2024-11-25 14:32:35.595922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.753 [2024-11-25 14:32:35.595928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.753 [2024-11-25 14:32:35.595933] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.753 [2024-11-25 14:32:35.607691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.753 [2024-11-25 14:32:35.608266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.753 [2024-11-25 14:32:35.608296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.753 [2024-11-25 14:32:35.608304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.753 [2024-11-25 14:32:35.608473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.753 [2024-11-25 14:32:35.608627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.753 [2024-11-25 14:32:35.608633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.753 [2024-11-25 14:32:35.608639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.753 [2024-11-25 14:32:35.608645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.753 [2024-11-25 14:32:35.620415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.753 [2024-11-25 14:32:35.620870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.620900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.620909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.621075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.621238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.621246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.621251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.621257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.754 [2024-11-25 14:32:35.633152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.633721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.633750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.633759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.633925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.634079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.634085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.634090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.634096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.754 [2024-11-25 14:32:35.645851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.646443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.646472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.646481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.646648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.646801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.646808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.646813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.646819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.754 [2024-11-25 14:32:35.658577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.659071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.659086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.659091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.659247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.659398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.659404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.659409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.659414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.754 [2024-11-25 14:32:35.671307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.671754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.671784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.671797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.671966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.672120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.672126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.672131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.672137] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.754 [2024-11-25 14:32:35.683998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.684567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.684597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.684605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.684772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.684926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.684932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.684937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.684943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.754 [2024-11-25 14:32:35.696698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.697260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.697290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.697299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.697467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.697620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.697626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.697632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.697638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.754 [2024-11-25 14:32:35.709397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.709984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.710014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.710023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.710196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.710354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.710361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.710366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.710372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.754 [2024-11-25 14:32:35.722130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.754 [2024-11-25 14:32:35.722677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.754 [2024-11-25 14:32:35.722708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.754 [2024-11-25 14:32:35.722716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.754 [2024-11-25 14:32:35.722883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.754 [2024-11-25 14:32:35.723036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.754 [2024-11-25 14:32:35.723043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.754 [2024-11-25 14:32:35.723048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.754 [2024-11-25 14:32:35.723053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.755 [2024-11-25 14:32:35.734809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.735424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.735454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.735463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.735629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.735782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.735789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.735794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.735800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.755 [2024-11-25 14:32:35.747561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.748093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.748107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.748112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.748269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.748420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.748425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.748434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.748439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.755 [2024-11-25 14:32:35.760184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.760721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.760751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.760760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.760927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.761080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.761086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.761091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.761097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.755 [2024-11-25 14:32:35.772867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.773433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.773463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.773471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.773637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.773791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.773797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.773803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.773808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.755 [2024-11-25 14:32:35.785560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.786138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.786173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.786182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.786348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.786501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.786508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.786513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.786518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.755 [2024-11-25 14:32:35.798271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.798826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.798856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.798865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.799031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.799192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.799199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.799204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.799210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:30.755 [2024-11-25 14:32:35.810963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.811452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.811482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.811491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.811657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.811811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.811817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.811823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.811828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:30.755 5324.00 IOPS, 20.80 MiB/s [2024-11-25T13:32:35.845Z] [2024-11-25 14:32:35.823583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.824153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.824189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.824197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.824363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.824517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.824523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.755 [2024-11-25 14:32:35.824528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.755 [2024-11-25 14:32:35.824535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
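[editor's note] The figure "5324.00 IOPS, 20.80 MiB/s" interleaved in the record above (carrying a UTC ISO-8601 timestamp, versus local time in the surrounding entries) is the test's periodic performance summary. The two numbers are mutually consistent if one assumes a 4 KiB I/O size, which the log does not state explicitly; the check below is just that arithmetic.

/* Sanity-check "5324.00 IOPS, 20.80 MiB/s" from the log line above.
 * Assumption (not stated in the log): 4 KiB per I/O, which makes the
 * two reported figures agree. */
#include <stdio.h>

int main(void)
{
    double iops = 5324.0;      /* from the log */
    double io_size = 4096.0;   /* assumed 4 KiB I/O size */
    double mib_per_s = iops * io_size / (1024.0 * 1024.0);

    printf("%.2f IOPS * %.0f B = %.2f MiB/s\n", iops, io_size, mib_per_s);
    /* Prints ~20.80 MiB/s, matching the reported throughput. */
    return 0;
}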
00:34:30.755 [2024-11-25 14:32:35.836290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:30.755 [2024-11-25 14:32:35.836863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.755 [2024-11-25 14:32:35.836893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:30.755 [2024-11-25 14:32:35.836904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:30.755 [2024-11-25 14:32:35.837071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:30.755 [2024-11-25 14:32:35.837230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:30.755 [2024-11-25 14:32:35.837238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:30.756 [2024-11-25 14:32:35.837243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:30.756 [2024-11-25 14:32:35.837249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.018 [2024-11-25 14:32:35.849007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.018 [2024-11-25 14:32:35.849588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.018 [2024-11-25 14:32:35.849618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.018 [2024-11-25 14:32:35.849627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.018 [2024-11-25 14:32:35.849793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.018 [2024-11-25 14:32:35.849946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.018 [2024-11-25 14:32:35.849952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.018 [2024-11-25 14:32:35.849958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.019 [2024-11-25 14:32:35.849963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:31.019 [2024-11-25 14:32:35.861722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.019 [2024-11-25 14:32:35.862369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.019 [2024-11-25 14:32:35.862400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.019 [2024-11-25 14:32:35.862409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.019 [2024-11-25 14:32:35.862576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.019 [2024-11-25 14:32:35.862730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.019 [2024-11-25 14:32:35.862738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.019 [2024-11-25 14:32:35.862745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.019 [2024-11-25 14:32:35.862751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.019 [2024-11-25 14:32:35.874386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.019 [2024-11-25 14:32:35.874961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.019 [2024-11-25 14:32:35.874991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.019 [2024-11-25 14:32:35.874999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.019 [2024-11-25 14:32:35.875171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.019 [2024-11-25 14:32:35.875330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.019 [2024-11-25 14:32:35.875336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.019 [2024-11-25 14:32:35.875342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.019 [2024-11-25 14:32:35.875348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:31.019 [2024-11-25 14:32:35.887098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.019 [2024-11-25 14:32:35.887659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.019 [2024-11-25 14:32:35.887689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.019 [2024-11-25 14:32:35.887698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.019 [2024-11-25 14:32:35.887864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.019 [2024-11-25 14:32:35.888018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.019 [2024-11-25 14:32:35.888024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.019 [2024-11-25 14:32:35.888029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.019 [2024-11-25 14:32:35.888035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.019 [2024-11-25 14:32:35.899792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.019 [2024-11-25 14:32:35.900368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.019 [2024-11-25 14:32:35.900398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.019 [2024-11-25 14:32:35.900406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.019 [2024-11-25 14:32:35.900573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.019 [2024-11-25 14:32:35.900726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.019 [2024-11-25 14:32:35.900733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.019 [2024-11-25 14:32:35.900738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.019 [2024-11-25 14:32:35.900743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:31.019 [2024-11-25 14:32:35.912501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.019 [2024-11-25 14:32:35.913074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.019 [2024-11-25 14:32:35.913104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.019 [2024-11-25 14:32:35.913113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.019 [2024-11-25 14:32:35.913286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.019 [2024-11-25 14:32:35.913440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.019 [2024-11-25 14:32:35.913447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.019 [2024-11-25 14:32:35.913455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.019 [2024-11-25 14:32:35.913461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.019 [2024-11-25 14:32:35.925219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.019 [2024-11-25 14:32:35.925792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.019 [2024-11-25 14:32:35.925822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.019 [2024-11-25 14:32:35.925830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.019 [2024-11-25 14:32:35.925996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.019 [2024-11-25 14:32:35.926150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.019 [2024-11-25 14:32:35.926156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.019 [2024-11-25 14:32:35.926170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.019 [2024-11-25 14:32:35.926175] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:31.019 [2024-11-25 14:32:35.937943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.019 [2024-11-25 14:32:35.938546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.019 [2024-11-25 14:32:35.938576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.019 [2024-11-25 14:32:35.938584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.019 [2024-11-25 14:32:35.938751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.019 [2024-11-25 14:32:35.938905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.019 [2024-11-25 14:32:35.938911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.019 [2024-11-25 14:32:35.938916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.019 [2024-11-25 14:32:35.938922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.019 [2024-11-25 14:32:35.950680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.019 [2024-11-25 14:32:35.951269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.019 [2024-11-25 14:32:35.951299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.019 [2024-11-25 14:32:35.951307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.019 [2024-11-25 14:32:35.951476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.019 [2024-11-25 14:32:35.951630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.019 [2024-11-25 14:32:35.951636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.019 [2024-11-25 14:32:35.951641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.019 [2024-11-25 14:32:35.951647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.019 [2024-11-25 14:32:35.963406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.019 [2024-11-25 14:32:35.963985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.019 [2024-11-25 14:32:35.964015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.019 [2024-11-25 14:32:35.964024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.019 [2024-11-25 14:32:35.964198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.019 [2024-11-25 14:32:35.964353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.019 [2024-11-25 14:32:35.964359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.019 [2024-11-25 14:32:35.964364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.019 [2024-11-25 14:32:35.964370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.019 [2024-11-25 14:32:35.976129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.019 [2024-11-25 14:32:35.976698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.019 [2024-11-25 14:32:35.976728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.019 [2024-11-25 14:32:35.976737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.019 [2024-11-25 14:32:35.976903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.019 [2024-11-25 14:32:35.977057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.019 [2024-11-25 14:32:35.977063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:35.977068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:35.977074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:35.988841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:35.989460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:35.989490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:35.989498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:35.989665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:35.989819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:35.989825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:35.989831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:35.989836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.001451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.002010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.002040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.002052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.002224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.002379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.002385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.002391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.002396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.014151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.014748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.014778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.014787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.014953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.015107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.015113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.015118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.015124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.026888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.027477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.027507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.027515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.027682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.027836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.027842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.027847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.027853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.039613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.040184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.040214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.040223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.040392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.040549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.040556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.040561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.040567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.052333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.052827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.052856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.052865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.053032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.053191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.053198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.053204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.053210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.064965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.065528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.065558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.065567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.065733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.065887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.065893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.065898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.065904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.077707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.078212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.078227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.078233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.078384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.078543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.078549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.078554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.078562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.090321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.090819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.090833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.090838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.090989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.091140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.091146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.020 [2024-11-25 14:32:36.091151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.020 [2024-11-25 14:32:36.091156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.020 [2024-11-25 14:32:36.103051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.020 [2024-11-25 14:32:36.103502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.020 [2024-11-25 14:32:36.103515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.020 [2024-11-25 14:32:36.103520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.020 [2024-11-25 14:32:36.103670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.020 [2024-11-25 14:32:36.103821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.020 [2024-11-25 14:32:36.103826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.021 [2024-11-25 14:32:36.103831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.021 [2024-11-25 14:32:36.103836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.283 [2024-11-25 14:32:36.115726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.283 [2024-11-25 14:32:36.116235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.283 [2024-11-25 14:32:36.116248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.283 [2024-11-25 14:32:36.116254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.283 [2024-11-25 14:32:36.116404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.283 [2024-11-25 14:32:36.116555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.283 [2024-11-25 14:32:36.116561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.283 [2024-11-25 14:32:36.116566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.283 [2024-11-25 14:32:36.116570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.283 [2024-11-25 14:32:36.128335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.283 [2024-11-25 14:32:36.128914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.283 [2024-11-25 14:32:36.128944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.283 [2024-11-25 14:32:36.128953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.283 [2024-11-25 14:32:36.129120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.283 [2024-11-25 14:32:36.129279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.283 [2024-11-25 14:32:36.129286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.283 [2024-11-25 14:32:36.129292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.283 [2024-11-25 14:32:36.129297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.283 [2024-11-25 14:32:36.141056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.283 [2024-11-25 14:32:36.141542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.283 [2024-11-25 14:32:36.141557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.283 [2024-11-25 14:32:36.141563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.283 [2024-11-25 14:32:36.141714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.141864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.141869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.141874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.141879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.153781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.154295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.154325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.154334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.154503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.154657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.154663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.154668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.154674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.166442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.167017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.167048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.167056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.167231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.167386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.167392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.167398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.167403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.179177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.179673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.179687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.179693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.179844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.179994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.180000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.180005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.180010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.191914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.192494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.192525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.192533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.192702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.192855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.192862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.192867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.192873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.204634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.205164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.205179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.205185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.205336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.205486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.205499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.205504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.205509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.217266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.217713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.217726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.217731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.217882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.218033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.218038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.218043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.218048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.229950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.230577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.230606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.230615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.230781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.230935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.230941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.230947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.230953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.242569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.243187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.243218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.243227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.243393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.243547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.243553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.243559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.243568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.255192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.255843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.255873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.255881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.256048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.284 [2024-11-25 14:32:36.256207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.284 [2024-11-25 14:32:36.256215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.284 [2024-11-25 14:32:36.256220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.284 [2024-11-25 14:32:36.256226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.284 [2024-11-25 14:32:36.267837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.284 [2024-11-25 14:32:36.268309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.284 [2024-11-25 14:32:36.268325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.284 [2024-11-25 14:32:36.268331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.284 [2024-11-25 14:32:36.268482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.268632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.268638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.268643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.268648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.285 [2024-11-25 14:32:36.280552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.281034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.281047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.281053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.281207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.281358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.281365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.281370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.281375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.285 [2024-11-25 14:32:36.293267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.293835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.293865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.293873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.294042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.294201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.294209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.294215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.294220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.285 [2024-11-25 14:32:36.305974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.306424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.306439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.306445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.306596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.306746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.306752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.306757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.306761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
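The records above all trace the same host-side retry cycle: bdev_nvme disconnects the controller, the POSIX sock layer's connect() to 10.0.0.2:4420 is refused with errno 111 (ECONNREFUSED) because the nvmf target process has just been killed, the half-open qpair is then flushed against a dead file descriptor (EBADF), and spdk_nvme_ctrlr_reconnect_poll_async reports the failed reinitialization, after which the cycle restarts roughly every 12-13 ms. The sketch below is not SPDK's implementation, only a minimal standalone C illustration, assuming a plain blocking socket, of how a connect() against a port with no listener surfaces errno 111 and how a caller might retry on that cadence:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical stand-in for the reconnect loop traced above: keep
 * re-dialing ip:port until a listener answers or we give up. */
static int dial_with_retry(const char *ip, uint16_t port, int max_tries)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    for (int attempt = 0; attempt < max_tries; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            return fd;                      /* target is listening again */
        /* errno 111 (ECONNREFUSED) is what posix_sock_create logs while
         * the target is down; print it in the same style as the log. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        usleep(12 * 1000);                  /* ~12 ms, matching the log cadence */
    }
    return -1;
}

int main(void)
{
    int fd = dial_with_retry("10.0.0.2", 4420, 80);
    if (fd >= 0) {
        puts("connected");
        close(fd);
    }
    return fd >= 0 ? 0 : 1;
}

A refused TCP connect fails fast (the peer's kernel answers with RST rather than timing out), which is why each full failure cycle in the log fits into about a dozen milliseconds.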
00:34:31.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3616684 Killed "${NVMF_APP[@]}" "$@"
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:31.285 [2024-11-25 14:32:36.318652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.319104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.319116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.319122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.319276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.319435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.319442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.319447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.319456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3618238
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3618238
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3618238 ']'
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:31.285 14:32:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:31.285 [2024-11-25 14:32:36.331361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.331725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.331738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.331743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.331894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.332044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.332050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.332055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.332060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.285 [2024-11-25 14:32:36.344104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.344573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.344586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.344592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.344742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.344893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.344899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.344904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.344909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.285 [2024-11-25 14:32:36.356805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.357162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.357176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.357185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.357336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.357487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.357492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.357498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.357502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.285 [2024-11-25 14:32:36.369547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.285 [2024-11-25 14:32:36.369862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.285 [2024-11-25 14:32:36.369875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.285 [2024-11-25 14:32:36.369880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.285 [2024-11-25 14:32:36.370031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.285 [2024-11-25 14:32:36.370185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.285 [2024-11-25 14:32:36.370191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.285 [2024-11-25 14:32:36.370197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.285 [2024-11-25 14:32:36.370201] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.549 [2024-11-25 14:32:36.377750] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:34:31.549 [2024-11-25 14:32:36.377796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:31.549 [2024-11-25 14:32:36.382254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.549 [2024-11-25 14:32:36.382766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.549 [2024-11-25 14:32:36.382778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.549 [2024-11-25 14:32:36.382784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.549 [2024-11-25 14:32:36.382935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.549 [2024-11-25 14:32:36.383087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.549 [2024-11-25 14:32:36.383093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.549 [2024-11-25 14:32:36.383099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.549 [2024-11-25 14:32:36.383104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.549 [2024-11-25 14:32:36.394865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.549 [2024-11-25 14:32:36.395202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.549 [2024-11-25 14:32:36.395222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.549 [2024-11-25 14:32:36.395232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.549 [2024-11-25 14:32:36.395389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.549 [2024-11-25 14:32:36.395541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.549 [2024-11-25 14:32:36.395547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.549 [2024-11-25 14:32:36.395553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.549 [2024-11-25 14:32:36.395558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.549 [2024-11-25 14:32:36.407602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.549 [2024-11-25 14:32:36.408166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.549 [2024-11-25 14:32:36.408196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.549 [2024-11-25 14:32:36.408205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.549 [2024-11-25 14:32:36.408372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.549 [2024-11-25 14:32:36.408526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.549 [2024-11-25 14:32:36.408532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.549 [2024-11-25 14:32:36.408538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.549 [2024-11-25 14:32:36.408544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.549 [2024-11-25 14:32:36.420322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.549 [2024-11-25 14:32:36.420827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.549 [2024-11-25 14:32:36.420841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.549 [2024-11-25 14:32:36.420847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.549 [2024-11-25 14:32:36.420998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.549 [2024-11-25 14:32:36.421149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.549 [2024-11-25 14:32:36.421155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.550 [2024-11-25 14:32:36.421240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.550 [2024-11-25 14:32:36.421245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.550 [2024-11-25 14:32:36.433009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.550 [2024-11-25 14:32:36.433587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.550 [2024-11-25 14:32:36.433617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.550 [2024-11-25 14:32:36.433627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.550 [2024-11-25 14:32:36.433793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.550 [2024-11-25 14:32:36.433951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.550 [2024-11-25 14:32:36.433957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.550 [2024-11-25 14:32:36.433963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.550 [2024-11-25 14:32:36.433968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.550 [2024-11-25 14:32:36.445744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.550 [2024-11-25 14:32:36.446359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.550 [2024-11-25 14:32:36.446390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.550 [2024-11-25 14:32:36.446399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.550 [2024-11-25 14:32:36.446568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.550 [2024-11-25 14:32:36.446722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.550 [2024-11-25 14:32:36.446728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.550 [2024-11-25 14:32:36.446734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.550 [2024-11-25 14:32:36.446740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.550 [2024-11-25 14:32:36.458364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.550 [2024-11-25 14:32:36.458713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.550 [2024-11-25 14:32:36.458728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.550 [2024-11-25 14:32:36.458734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.550 [2024-11-25 14:32:36.458884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.550 [2024-11-25 14:32:36.459036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.550 [2024-11-25 14:32:36.459041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.550 [2024-11-25 14:32:36.459047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.550 [2024-11-25 14:32:36.459052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.550 [2024-11-25 14:32:36.470515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:31.550 [2024-11-25 14:32:36.471097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.550 [2024-11-25 14:32:36.471646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.550 [2024-11-25 14:32:36.471677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.550 [2024-11-25 14:32:36.471686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.550 [2024-11-25 14:32:36.471852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.550 [2024-11-25 14:32:36.472006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.550 [2024-11-25 14:32:36.472016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.550 [2024-11-25 14:32:36.472022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.550 [2024-11-25 14:32:36.472027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:31.550 [2024-11-25 14:32:36.483808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.550 [2024-11-25 14:32:36.484464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.550 [2024-11-25 14:32:36.484495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.550 [2024-11-25 14:32:36.484504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.550 [2024-11-25 14:32:36.484672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.550 [2024-11-25 14:32:36.484826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.550 [2024-11-25 14:32:36.484832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.550 [2024-11-25 14:32:36.484838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.550 [2024-11-25 14:32:36.484844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.550 [2024-11-25 14:32:36.496472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.550 [2024-11-25 14:32:36.497066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.550 [2024-11-25 14:32:36.497097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.550 [2024-11-25 14:32:36.497106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.550 [2024-11-25 14:32:36.497282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.550 [2024-11-25 14:32:36.497436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.550 [2024-11-25 14:32:36.497442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.550 [2024-11-25 14:32:36.497448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.550 [2024-11-25 14:32:36.497454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.550 [2024-11-25 14:32:36.499805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:31.550 [2024-11-25 14:32:36.499827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:31.550 [2024-11-25 14:32:36.499833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:31.550 [2024-11-25 14:32:36.499839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:31.550 [2024-11-25 14:32:36.499844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
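The cycle repeated above is SPDK's bdev_nvme reconnect path: the controller is disconnected, the TCP qpair attempts a fresh connection to 10.0.0.2:4420 and gets errno 111 (ECONNREFUSED on Linux, i.e. nothing is listening at the target address), the subsequent flush fails with (9) EBADF because no socket was ever established, and the reset is retried. As a minimal illustrative sketch only (plain POSIX C, not SPDK's posix.c; the address and port are taken from the log), this is how a TCP connect() surfaces errno 111 when the peer actively refuses the connection:

/* sketch: reproduce "connect() failed, errno = 111" against a port
 * with no listener (assumes the peer is reachable; an unreachable
 * host would time out or report EHOSTUNREACH instead) */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with no listener on the target, errno is 111 (ECONNREFUSED) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}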
00:34:31.550 [2024-11-25 14:32:36.500924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:34:31.550 [2024-11-25 14:32:36.501076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:31.550 [2024-11-25 14:32:36.501078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:34:31.550 [2024-11-25 14:32:36.509107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.550 [2024-11-25 14:32:36.509400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.550 [2024-11-25 14:32:36.509416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.550 [2024-11-25 14:32:36.509427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.550 [2024-11-25 14:32:36.509580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.550 [2024-11-25 14:32:36.509731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.550 [2024-11-25 14:32:36.509738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.550 [2024-11-25 14:32:36.509743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.550 [2024-11-25 14:32:36.509748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.550 [2024-11-25 14:32:36.521836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.550 [2024-11-25 14:32:36.522411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.550 [2024-11-25 14:32:36.522426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.550 [2024-11-25 14:32:36.522432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.550 [2024-11-25 14:32:36.522586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.550 [2024-11-25 14:32:36.522738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.550 [2024-11-25 14:32:36.522744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.550 [2024-11-25 14:32:36.522750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.522755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.534533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.535052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.535067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.535072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.535228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.535380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.535386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.535392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.535397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.547166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.547632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.547645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.547651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.547802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.547958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.547964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.547969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.547974] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.559882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.560470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.560503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.560513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.560684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.560838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.560845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.560850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.560856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.572638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.573147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.573165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.573171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.573322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.573472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.573478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.573483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.573488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.585389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.585845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.585858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.585863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.586014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.586167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.586173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.586182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.586186] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.598082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.598658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.598689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.598698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.598867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.599021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.599027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.599033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.599039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.610808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.611426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.611457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.611466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.611633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.611786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.611793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.611798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.611803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.623439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.551 [2024-11-25 14:32:36.623924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.551 [2024-11-25 14:32:36.623939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.551 [2024-11-25 14:32:36.623945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.551 [2024-11-25 14:32:36.624096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.551 [2024-11-25 14:32:36.624250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.551 [2024-11-25 14:32:36.624256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.551 [2024-11-25 14:32:36.624262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.551 [2024-11-25 14:32:36.624267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.551 [2024-11-25 14:32:36.636174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.814 [2024-11-25 14:32:36.636607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.814 [2024-11-25 14:32:36.636621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.814 [2024-11-25 14:32:36.636627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.814 [2024-11-25 14:32:36.636778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.814 [2024-11-25 14:32:36.636928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.814 [2024-11-25 14:32:36.636934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.814 [2024-11-25 14:32:36.636939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.814 [2024-11-25 14:32:36.636944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.814 [2024-11-25 14:32:36.648841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.814 [2024-11-25 14:32:36.649471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.814 [2024-11-25 14:32:36.649501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.814 [2024-11-25 14:32:36.649510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.814 [2024-11-25 14:32:36.649677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.814 [2024-11-25 14:32:36.649830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.814 [2024-11-25 14:32:36.649837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.814 [2024-11-25 14:32:36.649842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.814 [2024-11-25 14:32:36.649848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.814 [2024-11-25 14:32:36.661472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.814 [2024-11-25 14:32:36.662063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.814 [2024-11-25 14:32:36.662094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.814 [2024-11-25 14:32:36.662102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.662275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.662429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.662435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.662441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.662447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.674220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.674834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.674864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.674878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.675044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.675204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.675212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.675217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.675223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.686840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.687401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.687432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.687440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.687607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.687761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.687767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.687773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.687778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.699640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.700125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.700155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.700170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.700337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.700491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.700497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.700503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.700508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.712274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.712775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.712790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.712795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.712946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.713101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.713107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.713112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.713117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.724887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.725450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.725481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.725489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.725656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.725810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.725817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.725822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.725828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.737592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.738187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.738217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.738226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.738394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.738547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.738553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.738559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.738564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.750323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.750776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.750805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.750814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.750980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.751134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.751140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.751149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.751155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.763105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.763669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.763699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.763708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.763875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.764028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.764035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.764040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.764046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.775808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.776184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.815 [2024-11-25 14:32:36.776200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.815 [2024-11-25 14:32:36.776205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.815 [2024-11-25 14:32:36.776356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.815 [2024-11-25 14:32:36.776507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.815 [2024-11-25 14:32:36.776512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.815 [2024-11-25 14:32:36.776517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.815 [2024-11-25 14:32:36.776523] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.815 [2024-11-25 14:32:36.788556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.815 [2024-11-25 14:32:36.789052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.816 [2024-11-25 14:32:36.789064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.816 [2024-11-25 14:32:36.789069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.816 [2024-11-25 14:32:36.789224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.816 [2024-11-25 14:32:36.789375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.816 [2024-11-25 14:32:36.789381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.816 [2024-11-25 14:32:36.789386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.816 [2024-11-25 14:32:36.789390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.816 [2024-11-25 14:32:36.801274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.816 [2024-11-25 14:32:36.801752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.816 [2024-11-25 14:32:36.801782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.816 [2024-11-25 14:32:36.801790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.816 [2024-11-25 14:32:36.801957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.816 [2024-11-25 14:32:36.802111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.816 [2024-11-25 14:32:36.802117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.816 [2024-11-25 14:32:36.802123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.816 [2024-11-25 14:32:36.802128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.816 [2024-11-25 14:32:36.814027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.816 [2024-11-25 14:32:36.814601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.816 [2024-11-25 14:32:36.814631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.816 [2024-11-25 14:32:36.814640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.816 [2024-11-25 14:32:36.814806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.816 [2024-11-25 14:32:36.814960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.816 [2024-11-25 14:32:36.814966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.816 [2024-11-25 14:32:36.814972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.816 [2024-11-25 14:32:36.814977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:31.816 4436.67 IOPS, 17.33 MiB/s [2024-11-25T13:32:36.906Z] [2024-11-25 14:32:36.826764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:31.816 [2024-11-25 14:32:36.827286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:31.816 [2024-11-25 14:32:36.827300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420
00:34:31.816 [2024-11-25 14:32:36.827306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set
00:34:31.816 [2024-11-25 14:32:36.827458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor
00:34:31.816 [2024-11-25 14:32:36.827608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:31.816 [2024-11-25 14:32:36.827614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:31.816 [2024-11-25 14:32:36.827619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:31.816 [2024-11-25 14:32:36.827624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
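The interleaved throughput sample above ("4436.67 IOPS, 17.33 MiB/s") is internally consistent if the workload issues 4 KiB I/Os; that I/O size is an assumption, since it is not printed in this excerpt: 4436.67 x 4096 B is roughly 18.17 MB/s, which is 17.33 MiB/s. A one-line check in C:

/* consistency check for the perf sample above; the 4 KiB I/O size
 * is an assumption, not shown in this log excerpt */
#include <stdio.h>

int main(void)
{
    double iops = 4436.67;       /* reported by the perf line */
    double io_bytes = 4096.0;    /* assumed 4 KiB per I/O */
    printf("%.2f MiB/s\n", iops * io_bytes / (1024.0 * 1024.0)); /* 17.33 */
    return 0;
}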
00:34:31.816 [2024-11-25 14:32:36.839514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.816 [2024-11-25 14:32:36.839854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.816 [2024-11-25 14:32:36.839868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.816 [2024-11-25 14:32:36.839878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.816 [2024-11-25 14:32:36.840029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.816 [2024-11-25 14:32:36.840186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.816 [2024-11-25 14:32:36.840193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.816 [2024-11-25 14:32:36.840199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.816 [2024-11-25 14:32:36.840204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.816 [2024-11-25 14:32:36.852238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.816 [2024-11-25 14:32:36.852843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.816 [2024-11-25 14:32:36.852873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.816 [2024-11-25 14:32:36.852882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.816 [2024-11-25 14:32:36.853049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.816 [2024-11-25 14:32:36.853208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.816 [2024-11-25 14:32:36.853215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.816 [2024-11-25 14:32:36.853221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.816 [2024-11-25 14:32:36.853226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:31.816 [2024-11-25 14:32:36.864975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.816 [2024-11-25 14:32:36.865517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.816 [2024-11-25 14:32:36.865547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.816 [2024-11-25 14:32:36.865556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.816 [2024-11-25 14:32:36.865722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.816 [2024-11-25 14:32:36.865876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.816 [2024-11-25 14:32:36.865882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.816 [2024-11-25 14:32:36.865888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.816 [2024-11-25 14:32:36.865893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:31.816 [2024-11-25 14:32:36.877652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.816 [2024-11-25 14:32:36.878125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.816 [2024-11-25 14:32:36.878155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.816 [2024-11-25 14:32:36.878169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.816 [2024-11-25 14:32:36.878336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.816 [2024-11-25 14:32:36.878493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.816 [2024-11-25 14:32:36.878500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.816 [2024-11-25 14:32:36.878505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.816 [2024-11-25 14:32:36.878511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:31.816 [2024-11-25 14:32:36.890268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:31.816 [2024-11-25 14:32:36.890727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:31.816 [2024-11-25 14:32:36.890741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:31.816 [2024-11-25 14:32:36.890747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:31.816 [2024-11-25 14:32:36.890898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:31.816 [2024-11-25 14:32:36.891048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:31.816 [2024-11-25 14:32:36.891054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:31.816 [2024-11-25 14:32:36.891059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:31.816 [2024-11-25 14:32:36.891064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.081 [2024-11-25 14:32:36.902952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.081 [2024-11-25 14:32:36.903408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.081 [2024-11-25 14:32:36.903421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.081 [2024-11-25 14:32:36.903427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.081 [2024-11-25 14:32:36.903578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.081 [2024-11-25 14:32:36.903728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.081 [2024-11-25 14:32:36.903734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.081 [2024-11-25 14:32:36.903739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.081 [2024-11-25 14:32:36.903743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:32.081 [2024-11-25 14:32:36.915633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.081 [2024-11-25 14:32:36.916093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.081 [2024-11-25 14:32:36.916105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.081 [2024-11-25 14:32:36.916110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.081 [2024-11-25 14:32:36.916265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.081 [2024-11-25 14:32:36.916416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.081 [2024-11-25 14:32:36.916422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.081 [2024-11-25 14:32:36.916430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.081 [2024-11-25 14:32:36.916435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.081 [2024-11-25 14:32:36.928330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.081 [2024-11-25 14:32:36.928808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.081 [2024-11-25 14:32:36.928838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.081 [2024-11-25 14:32:36.928846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.081 [2024-11-25 14:32:36.929013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.081 [2024-11-25 14:32:36.929172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.081 [2024-11-25 14:32:36.929179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.081 [2024-11-25 14:32:36.929185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.081 [2024-11-25 14:32:36.929191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:32.081 [2024-11-25 14:32:36.940938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.081 [2024-11-25 14:32:36.941393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.081 [2024-11-25 14:32:36.941422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.081 [2024-11-25 14:32:36.941431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.081 [2024-11-25 14:32:36.941598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.081 [2024-11-25 14:32:36.941751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.081 [2024-11-25 14:32:36.941757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.081 [2024-11-25 14:32:36.941762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.081 [2024-11-25 14:32:36.941768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.081 [2024-11-25 14:32:36.953667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.081 [2024-11-25 14:32:36.954239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.081 [2024-11-25 14:32:36.954269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.081 [2024-11-25 14:32:36.954277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.081 [2024-11-25 14:32:36.954446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.081 [2024-11-25 14:32:36.954600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.081 [2024-11-25 14:32:36.954606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.081 [2024-11-25 14:32:36.954611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.081 [2024-11-25 14:32:36.954617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:32.081 [... 16 further iterations of the same cycle elided: nvme_ctrlr_disconnect *NOTICE* resetting controller, posix_sock_create connect() failed errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0xe27080 (addr=10.0.0.2, port=4420), "Failed to flush tqpair=0xe27080 (9): Bad file descriptor", controller reinitialization failed, "Resetting controller failed." — timestamps advance roughly every 12.7 ms, from 14:32:36.966 through 14:32:37.157 ...]
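A note on the failure signature above: errno 111 is ECONNREFUSED on Linux — the initiator's connect() to 10.0.0.2:4420 is refused because nothing is listening there yet (the target's listener only comes up at 14:32:37.286, further down in this log). Each cycle is one pass of the same state machine: disconnect NOTICE, connect() refused, the flush on the dead socket returns "Bad file descriptor", nvme_ctrlr_process_init fails, and bdev_nvme marks the reset failed and schedules a retry. A minimal way to watch for the listener from the shell, assuming bash's /dev/tcp redirection is available (an illustrative probe, not part of the test scripts):

```bash
# Poll until the NVMe/TCP listener accepts connections; until then every
# attempt fails with ECONNREFUSED (errno 111), exactly as in the log above.
until (exec 3<>"/dev/tcp/10.0.0.2/4420") 2>/dev/null; do
  sleep 0.2
done
# The probe fd is opened inside a subshell, so it closes when the subshell exits.
echo "10.0.0.2:4420 is accepting connections"
```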
00:34:32.345 [2024-11-25 14:32:37.169571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.345 [2024-11-25 14:32:37.169894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.345 [2024-11-25 14:32:37.169908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.345 [2024-11-25 14:32:37.169918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.345 [2024-11-25 14:32:37.170069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.345 [2024-11-25 14:32:37.170224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.345 [2024-11-25 14:32:37.170230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.345 [2024-11-25 14:32:37.170235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.345 [2024-11-25 14:32:37.170240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.345 [2024-11-25 14:32:37.182286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.345 [2024-11-25 14:32:37.182755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.345 [2024-11-25 14:32:37.182785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.345 [2024-11-25 14:32:37.182794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.345 [2024-11-25 14:32:37.182961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.345 [2024-11-25 14:32:37.183115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.345 [2024-11-25 14:32:37.183121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.345 [2024-11-25 14:32:37.183129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.345 [2024-11-25 14:32:37.183135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:32.345 [2024-11-25 14:32:37.194899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.345 [2024-11-25 14:32:37.195401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.345 [2024-11-25 14:32:37.195416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.345 [2024-11-25 14:32:37.195422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.345 [2024-11-25 14:32:37.195573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.345 [2024-11-25 14:32:37.195725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.345 [2024-11-25 14:32:37.195730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.345 [2024-11-25 14:32:37.195735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.345 [2024-11-25 14:32:37.195740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.345 [2024-11-25 14:32:37.207636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.345 [2024-11-25 14:32:37.208227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.345 [2024-11-25 14:32:37.208258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.345 [2024-11-25 14:32:37.208272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.345 [2024-11-25 14:32:37.208438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.345 [2024-11-25 14:32:37.208593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.345 [2024-11-25 14:32:37.208599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.345 [2024-11-25 14:32:37.208605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.345 [2024-11-25 14:32:37.208611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.345 [2024-11-25 14:32:37.220379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.345 [2024-11-25 14:32:37.220964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.345 [2024-11-25 14:32:37.220995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.345 [2024-11-25 14:32:37.221004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.345 [2024-11-25 14:32:37.221177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.345 [2024-11-25 14:32:37.221332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.345 [2024-11-25 14:32:37.221339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.345 [2024-11-25 14:32:37.221344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.345 [2024-11-25 14:32:37.221350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
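The trap installed above — trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT — is what guarantees the target and its shared memory are cleaned up even if the test dies mid-run; the `|| :` keeps a failing process_shm from aborting the handler under set -e. The same pattern in isolation, with stand-in function names rather than the SPDK helpers:

```bash
#!/usr/bin/env bash
set -e
# Stand-ins for the harness's process_shm / nvmftestfini helpers.
dump_state() { echo "dumping state"; false; }   # may fail
teardown()   { echo "tearing down target"; }

# '|| :' swallows dump_state's failure so teardown still runs under set -e.
trap 'dump_state || :; teardown' SIGINT SIGTERM EXIT

echo "test body runs here"
```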
00:34:32.345 [2024-11-25 14:32:37.224887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.345 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.345 [2024-11-25 14:32:37.233114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.345 [2024-11-25 14:32:37.233672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.345 [2024-11-25 14:32:37.233701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.345 [2024-11-25 14:32:37.233710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.345 [2024-11-25 14:32:37.233876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.345 [2024-11-25 14:32:37.234030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.346 [2024-11-25 14:32:37.234036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.346 [2024-11-25 14:32:37.234046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.346 [2024-11-25 14:32:37.234052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.346 [2024-11-25 14:32:37.245810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.346 [2024-11-25 14:32:37.246446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.346 [2024-11-25 14:32:37.246476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.346 [2024-11-25 14:32:37.246485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.346 [2024-11-25 14:32:37.246652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.346 [2024-11-25 14:32:37.246806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.346 [2024-11-25 14:32:37.246812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.346 [2024-11-25 14:32:37.246818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.346 [2024-11-25 14:32:37.246824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:32.346 Malloc0 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.346 [2024-11-25 14:32:37.258444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.346 [2024-11-25 14:32:37.258867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.346 [2024-11-25 14:32:37.258897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.346 [2024-11-25 14:32:37.258906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.346 [2024-11-25 14:32:37.259073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.346 [2024-11-25 14:32:37.259230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.346 [2024-11-25 14:32:37.259237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.346 [2024-11-25 14:32:37.259243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.346 [2024-11-25 14:32:37.259249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.346 [2024-11-25 14:32:37.271142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.346 [2024-11-25 14:32:37.271748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.346 [2024-11-25 14:32:37.271779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.346 [2024-11-25 14:32:37.271791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.346 [2024-11-25 14:32:37.271958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.346 [2024-11-25 14:32:37.272112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.346 [2024-11-25 14:32:37.272118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.346 [2024-11-25 14:32:37.272123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:34:32.346 [2024-11-25 14:32:37.272129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:32.346 [2024-11-25 14:32:37.283887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.346 [2024-11-25 14:32:37.284270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:32.346 [2024-11-25 14:32:37.284299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe27080 with addr=10.0.0.2, port=4420 00:34:32.346 [2024-11-25 14:32:37.284308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe27080 is same with the state(6) to be set 00:34:32.346 [2024-11-25 14:32:37.284477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe27080 (9): Bad file descriptor 00:34:32.346 [2024-11-25 14:32:37.284631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:32.346 [2024-11-25 14:32:37.284638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:32.346 [2024-11-25 14:32:37.284643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:32.346 [2024-11-25 14:32:37.284649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:32.346 [2024-11-25 14:32:37.286891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.346 14:32:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3617086 00:34:32.346 [2024-11-25 14:32:37.296548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:32.346 [2024-11-25 14:32:37.325928] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
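Pulling the bringup out of the interleaved reset noise: host/bdevperf.sh lines 17-21 create the TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001, attach the namespace, and add the 10.0.0.2:4420 listener — at which point tcp.c logs "Target Listening" and the queued controller resets finally succeed (bdev_nvme.c:2282). Run back-to-back, the sequence looks like this (a sketch assuming a running nvmf_tgt reachable via SPDK's scripts/rpc.py; in the test itself these calls go through the rpc_cmd wrapper):

```bash
rpc=./scripts/rpc.py   # path assumes the SPDK source tree as the working directory
$rpc nvmf_create_transport -t tcp -o -u 8192        # flags copied from the trace above
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```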
00:34:33.863 4690.29 IOPS, 18.32 MiB/s [2024-11-25T13:32:39.896Z] 5720.38 IOPS, 22.35 MiB/s [2024-11-25T13:32:40.837Z] 6514.44 IOPS, 25.45 MiB/s [2024-11-25T13:32:42.226Z] 7164.40 IOPS, 27.99 MiB/s [2024-11-25T13:32:43.168Z] 7702.45 IOPS, 30.09 MiB/s [2024-11-25T13:32:44.113Z] 8135.83 IOPS, 31.78 MiB/s [2024-11-25T13:32:45.055Z] 8514.85 IOPS, 33.26 MiB/s [2024-11-25T13:32:45.998Z] 8815.93 IOPS, 34.44 MiB/s 00:34:40.908 Latency(us) 00:34:40.908 [2024-11-25T13:32:45.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:40.908 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:40.908 Verification LBA range: start 0x0 length 0x4000 00:34:40.908 Nvme1n1 : 15.01 9079.80 35.47 13362.32 0.00 5684.72 552.96 13380.27 00:34:40.908 [2024-11-25T13:32:45.998Z] =================================================================================================================== 00:34:40.908 [2024-11-25T13:32:45.998Z] Total : 9079.80 35.47 13362.32 0.00 5684.72 552.96 13380.27 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:40.908 14:32:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:40.908 rmmod nvme_tcp 00:34:40.908 rmmod nvme_fabrics 00:34:40.908 rmmod nvme_keyring 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3618238 ']' 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3618238 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3618238 ']' 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3618238 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3618238 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3618238' 00:34:41.169 killing process with pid 3618238 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3618238 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3618238 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.169 14:32:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.714 14:32:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:43.714 00:34:43.714 real 0m28.335s 00:34:43.714 user 1m3.492s 00:34:43.714 sys 0m7.814s 00:34:43.714 14:32:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.714 14:32:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:43.714 ************************************ 00:34:43.714 END TEST nvmf_bdevperf 00:34:43.714 ************************************ 00:34:43.714 14:32:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:43.714 14:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:43.714 14:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.715 ************************************ 00:34:43.715 START TEST nvmf_target_disconnect 00:34:43.715 ************************************ 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:43.715 * Looking for test storage... 
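Before the nvmf_target_disconnect prologue gets going, a quick sanity check on the bdevperf summary above: with the job's 4096-byte IO size, the MiB/s column should equal IOPS × 4096 / 2^20, and it does — 9079.80 IOPS works out to 35.47 MiB/s, matching the Nvme1n1 row (likewise 4690.29 IOPS ≈ 18.32 MiB/s in the first ramp sample).

```bash
# Cross-check the bdevperf throughput column: MiB/s = IOPS * io_size / 2^20.
awk -v iops=9079.80 -v sz=4096 \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
# prints: 35.47 MiB/s
```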
00:34:43.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.715 --rc genhtml_branch_coverage=1 00:34:43.715 --rc genhtml_function_coverage=1 00:34:43.715 --rc genhtml_legend=1 00:34:43.715 --rc geninfo_all_blocks=1 00:34:43.715 --rc geninfo_unexecuted_blocks=1 00:34:43.715 00:34:43.715 ' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.715 --rc genhtml_branch_coverage=1 00:34:43.715 --rc genhtml_function_coverage=1 00:34:43.715 --rc genhtml_legend=1 00:34:43.715 --rc geninfo_all_blocks=1 00:34:43.715 --rc geninfo_unexecuted_blocks=1 00:34:43.715 00:34:43.715 ' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.715 --rc genhtml_branch_coverage=1 00:34:43.715 --rc genhtml_function_coverage=1 00:34:43.715 --rc genhtml_legend=1 00:34:43.715 --rc geninfo_all_blocks=1 00:34:43.715 --rc geninfo_unexecuted_blocks=1 00:34:43.715 00:34:43.715 ' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:43.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.715 --rc genhtml_branch_coverage=1 00:34:43.715 --rc genhtml_function_coverage=1 00:34:43.715 --rc genhtml_legend=1 00:34:43.715 --rc geninfo_all_blocks=1 00:34:43.715 --rc geninfo_unexecuted_blocks=1 00:34:43.715 00:34:43.715 ' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.715 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:43.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:34:43.716 14:32:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:51.856 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:51.856 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:51.856 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:51.856 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:51.857 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
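At this point nvmftestinit has identified the two E810 ports (cvl_0_0, cvl_0_1) and hands off to nvmf_tcp_init, whose commands follow in the trace. Condensed into a plain shell sketch, using the interface names and addresses from this run (not a verbatim excerpt of test/nvmf/common.sh; the harness wraps some of these in helpers such as ipts):

    # Split the two ports across a network namespace so initiator and target
    # traverse a real wire: cvl_0_0 becomes the target NIC, cvl_0_1 the initiator NIC.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                   # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> host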
00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:51.857 14:32:55 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:51.857 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:51.857 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:34:51.857 00:34:51.857 --- 10.0.0.2 ping statistics --- 00:34:51.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.857 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:51.857 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:51.857 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:34:51.857 00:34:51.857 --- 10.0.0.1 ping statistics --- 00:34:51.857 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:51.857 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:51.857 ************************************ 00:34:51.857 START TEST nvmf_target_disconnect_tc1 00:34:51.857 ************************************ 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:51.857 14:32:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:51.857 [2024-11-25 14:32:56.349689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:51.857 [2024-11-25 14:32:56.349794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc77ad0 with addr=10.0.0.2, port=4420 00:34:51.857 [2024-11-25 14:32:56.349823] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:51.857 [2024-11-25 14:32:56.349844] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:51.857 [2024-11-25 14:32:56.349852] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:51.857 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:51.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:51.857 Initializing NVMe Controllers 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:51.857 00:34:51.857 real 0m0.145s 00:34:51.857 user 0m0.062s 00:34:51.857 sys 0m0.083s 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:51.857 ************************************ 00:34:51.857 END TEST nvmf_target_disconnect_tc1 00:34:51.857 ************************************ 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
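The tc1 pass recorded just above hinges on the NOT wrapper from autotest_common.sh: with no target listening yet, the reconnect example must fail (es=1 in the trace), and the test succeeds precisely because it does. A simplified sketch of that pattern; as the (( es > 128 )) check in the trace hints, the real helper also treats exits above 128 (deaths by signal) specially rather than counting them as a clean failure:

    NOT() {
        # Succeed only when the wrapped command fails; fail if it succeeds.
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # Usage, mirroring host/target_disconnect.sh@32 (paths shortened):
    NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'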
00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:51.857 ************************************ 00:34:51.857 START TEST nvmf_target_disconnect_tc2 00:34:51.857 ************************************ 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:51.857 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3624409 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3624409 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3624409 ']' 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:51.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:51.858 14:32:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:51.858 [2024-11-25 14:32:56.514529] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:34:51.858 [2024-11-25 14:32:56.514591] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:51.858 [2024-11-25 14:32:56.614283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:51.858 [2024-11-25 14:32:56.666447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:51.858 [2024-11-25 14:32:56.666499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:51.858 [2024-11-25 14:32:56.666508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:51.858 [2024-11-25 14:32:56.666515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:51.858 [2024-11-25 14:32:56.666522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:51.858 [2024-11-25 14:32:56.668558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:51.858 [2024-11-25 14:32:56.668719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:51.858 [2024-11-25 14:32:56.668882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:51.858 [2024-11-25 14:32:56.668882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.430 Malloc0 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.430 [2024-11-25 14:32:57.421641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.430 14:32:57 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.430 [2024-11-25 14:32:57.462031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3624505 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:52.430 14:32:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:55.000 14:32:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3624409 00:34:55.000 14:32:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error 
(sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 [2024-11-25 14:32:59.501497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed 
with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Write completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 Read completed with error (sct=0, sc=8) 00:34:55.000 starting I/O failed 00:34:55.000 [2024-11-25 14:32:59.501883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:34:55.000 [2024-11-25 14:32:59.502409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.000 [2024-11-25 14:32:59.502475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.000 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.502892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.502905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.503449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.503506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.503852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.503866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.504394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.504451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 
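To recap how tc2 reached this state, condensed from the trace above (arguments exactly as logged, paths shortened to repo-relative; rpc_cmd is the harness wrapper that talks to the target's /var/tmp/spdk.sock):

    # Target side, inside the cvl_0_0_ns_spdk namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator side:
    build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    sleep 2
    kill -9 "$nvmfpid"      # hard-kill the target while I/O is in flight
    sleep 2

The two blocks of 32 'completed with error (sct=0, sc=8) / starting I/O failed' lines are the full -q 32 queue depth being aborted on each qpair once its completion queue reports transport error -6, and the connect() failures with errno = 111 (ECONNREFUSED) that follow are expected: after the kill -9, nothing is listening on 10.0.0.2:4420.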
00:34:55.001 [2024-11-25 14:32:59.504797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.504813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.505037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.505048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.505335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.505349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.505633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.505645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.505856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.505867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.506217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.506229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.506444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.506456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.506766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.506778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.506999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.507013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.507314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.507335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 
00:34:55.001 [2024-11-25 14:32:59.507640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.507652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.507867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.507878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.508181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.508195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.508428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.508441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.508815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.508827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.509146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.509163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.509486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.509498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.509819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.509834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.510154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.510172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.510508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.510520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 
00:34:55.001 [2024-11-25 14:32:59.510860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.510871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.511193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.511206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.511425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.511437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.511807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.511819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.512128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.001 [2024-11-25 14:32:59.512140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.001 qpair failed and we were unable to recover it. 00:34:55.001 [2024-11-25 14:32:59.512488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.512500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.512803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.512817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.513152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.513167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.513433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.513445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.513774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.513786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 
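Each repeated triplet here (posix_sock_create connect() failure, nvme_tcp_qpair_connect_sock error, 'qpair failed and we were unable to recover it') is one fresh reconnect attempt against the dead listener. A hypothetical shell-level equivalent of what the example keeps hitting, for illustration only (not SPDK code; uses bash's /dev/tcp redirection):

    # Probe 10.0.0.2:4420 the way each reconnect attempt does; while the target
    # stays down, every attempt is refused (errno 111, ECONNREFUSED).
    while ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo 'still refused: no listener on 10.0.0.2:4420'
        sleep 0.2
    done
    echo 'listener is back'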
00:34:55.002 [2024-11-25 14:32:59.514078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.514090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.514409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.514422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.514781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.514794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.515091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.515103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.515380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.515392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.515698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.515709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.516014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.516025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.516335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.516348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.516675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.516688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.516997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.517008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 
00:34:55.002 [2024-11-25 14:32:59.517341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.517353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.517669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.517681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.517778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.517790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.518113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.518125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.518435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.518445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.518763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.518776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.519081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.519091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.519488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.519499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.519681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.519692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 00:34:55.002 [2024-11-25 14:32:59.520030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.002 [2024-11-25 14:32:59.520045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.002 qpair failed and we were unable to recover it. 
00:34:55.002 [2024-11-25 14:32:59.520386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.002 [2024-11-25 14:32:59.520398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420
00:34:55.002 qpair failed and we were unable to recover it.
00:34:55.010 [ ... the same three-line error sequence repeats without interruption through 2024-11-25 14:32:59.592977: roughly 200 further connect() attempts to 10.0.0.2:4420 on tqpair=0x7f70a4000b90, each failing with errno = 111 and each ending in "qpair failed and we were unable to recover it." ... ]
00:34:55.010 [2024-11-25 14:32:59.593338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.593367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.593729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.593757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.594120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.594150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.594525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.594556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.594880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.594909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.595183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.595213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.595569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.595599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.595973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.596002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.596277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.596308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.596538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.596571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 
00:34:55.010 [2024-11-25 14:32:59.596932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.596964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.597322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.597352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.597659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.597689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.598061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.598090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.598459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.598490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.598847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.598875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.599239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.599276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.599651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.599680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.600042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.600070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.600413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.600444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 
00:34:55.010 [2024-11-25 14:32:59.600875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.600904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.601258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.601290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.010 [2024-11-25 14:32:59.601658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.010 [2024-11-25 14:32:59.601688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.010 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.602017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.602048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.602422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.602452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.602809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.602839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.603093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.603125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.603508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.603539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.603902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.603930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.604304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.604336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 
00:34:55.011 [2024-11-25 14:32:59.604683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.604714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.605057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.605086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.605434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.605465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.605823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.605853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.606099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.606127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.606515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.606546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.606915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.606944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.607304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.607333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.607581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.607613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.607956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.607986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 
00:34:55.011 [2024-11-25 14:32:59.608211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.608242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.608610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.608639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.609003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.609031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.609313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.609344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.609704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.609733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.610096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.610126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.610506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.610537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.610773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.610804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.611180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.611212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.611584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.611612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 
00:34:55.011 [2024-11-25 14:32:59.611984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.612013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.011 [2024-11-25 14:32:59.612384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.011 [2024-11-25 14:32:59.612415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.011 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.612768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.612797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.613183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.613213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.613458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.613487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.613844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.613873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.614253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.614290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.614657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.614688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.615070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.615098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.615336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.615368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 
00:34:55.012 [2024-11-25 14:32:59.615735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.615764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.616128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.616156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.616535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.616565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.616934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.616964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.617309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.617341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.617766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.617796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.618083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.618112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.618408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.618438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.618798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.618828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.619190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.619221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 
00:34:55.012 [2024-11-25 14:32:59.619602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.619631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.619877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.619907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.620279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.620311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.620661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.620690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.621053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.621084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.621430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.621460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.621821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.621852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.622193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.012 [2024-11-25 14:32:59.622223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.012 qpair failed and we were unable to recover it. 00:34:55.012 [2024-11-25 14:32:59.622596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.622625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.622996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.623024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 
00:34:55.013 [2024-11-25 14:32:59.623398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.623429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.623830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.623860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.624223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.624259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.624651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.624680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.625041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.625070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.625407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.625437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.625801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.625831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.626196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.626227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.626585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.626614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.626974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.627003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 
00:34:55.013 [2024-11-25 14:32:59.627391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.627420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.627753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.627782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.628149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.628206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.628574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.628605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.628982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.629012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.629385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.629416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.629672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.629706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.630096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.630126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.630536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.630567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.630926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.630955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 
00:34:55.013 [2024-11-25 14:32:59.631322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.631352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.631718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.631747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.632115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.632143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.632510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.632540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.632874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.632904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.633267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.633299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.633547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.633577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.633851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.633879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.013 [2024-11-25 14:32:59.634254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.013 [2024-11-25 14:32:59.634284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.013 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.634709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.634738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 
00:34:55.014 [2024-11-25 14:32:59.635141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.635180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.635418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.635451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.635837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.635866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.636215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.636245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.636606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.636635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.636993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.637022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.637393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.637424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.637765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.637793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.638184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.638215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.638452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.638481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 
00:34:55.014 [2024-11-25 14:32:59.638736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.638765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.639118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.639148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.639514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.639544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.639910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.639939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.640283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.640314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.640672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.640701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.641058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.641085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.641466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.641497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.641863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.641892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 00:34:55.014 [2024-11-25 14:32:59.642257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.642286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it. 
00:34:55.014 [2024-11-25 14:32:59.642642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.014 [2024-11-25 14:32:59.642670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.014 qpair failed and we were unable to recover it.
[... the preceding three-message error repeats unchanged roughly 200 more times, with only the timestamps advancing from 14:32:59.642 to 14:32:59.721 (wall clock 00:34:55.014 through 00:34:55.021); every retry reports the same tqpair=0x7f70a4000b90, addr=10.0.0.2, port=4420, errno = 111 ...]
00:34:55.021 [2024-11-25 14:32:59.721891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.021 [2024-11-25 14:32:59.721921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.021 qpair failed and we were unable to recover it.
00:34:55.021 [2024-11-25 14:32:59.722301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.722332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.722698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.722729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.723074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.723103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.723344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.723374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.723720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.723748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.724116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.724145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.724406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.724435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.724672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.724701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.725057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.725087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.725323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.725356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 
00:34:55.022 [2024-11-25 14:32:59.725715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.725746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.726115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.726144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.726508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.726539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.726762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.726791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.727141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.727191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.727540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.727569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.727930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.727960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.728389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.728420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.728776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.728804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.729181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.729211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 
00:34:55.022 [2024-11-25 14:32:59.729549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.729579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.729953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.729982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.730322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.730353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.730723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.730763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.731102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.731132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.731512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.731542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.731893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.731922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.732181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.732213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.732598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.732627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.022 [2024-11-25 14:32:59.732886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.732915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 
00:34:55.022 [2024-11-25 14:32:59.733266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.022 [2024-11-25 14:32:59.733297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.022 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.733648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.733677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.734038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.734068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.734427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.734458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.734830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.734858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.735228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.735258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.735606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.735634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.736005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.736034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.736401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.736430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.736785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.736815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 
00:34:55.023 [2024-11-25 14:32:59.737181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.737211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.737519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.737547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.737904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.737935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.738295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.738324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.738683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.738713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.739061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.739090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.739405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.739435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.739802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.739830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.740204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.740234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.740619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.740648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 
00:34:55.023 [2024-11-25 14:32:59.741038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.741067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.741420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.741451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.741810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.741841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.742200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.742230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.742585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.023 [2024-11-25 14:32:59.742614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.023 qpair failed and we were unable to recover it. 00:34:55.023 [2024-11-25 14:32:59.742862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.742892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.743230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.743266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.743607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.743635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.743998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.744027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.744340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.744369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 
00:34:55.024 [2024-11-25 14:32:59.744742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.744771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.745141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.745181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.745549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.745577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.745937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.745972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.746224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.746255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.746596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.746625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.747006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.747036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.747405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.747437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.747643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.747672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.748043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.748071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 
00:34:55.024 [2024-11-25 14:32:59.748445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.748475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.748819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.748847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.749265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.749295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.749659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.749690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.750056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.750085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.750339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.750370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.750756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.750787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.751142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.751180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.751582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.751611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.752076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.752115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 
00:34:55.024 [2024-11-25 14:32:59.752397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.752430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.752821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.752852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.753241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.753274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.753675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.753705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.754066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.024 [2024-11-25 14:32:59.754097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.024 qpair failed and we were unable to recover it. 00:34:55.024 [2024-11-25 14:32:59.754507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.754543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.754830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.754858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.755246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.755282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.755647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.755676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.756031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.756061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 
00:34:55.025 [2024-11-25 14:32:59.756445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.756477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.756765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.756793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.757171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.757202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.757545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.757575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.757943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.757972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.758414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.758444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.758797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.758827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.759087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.759115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.759520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.759551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.759920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.759948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 
00:34:55.025 [2024-11-25 14:32:59.760208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.760242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.760639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.760670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.760996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.761026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.761385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.761423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.761781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.761809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.762179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.762211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.762567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.762597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.762973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.763003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.763364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.763396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.763774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.763803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 
00:34:55.025 [2024-11-25 14:32:59.764088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.764116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.764357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.764389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.764743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.764773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.025 [2024-11-25 14:32:59.765028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.025 [2024-11-25 14:32:59.765058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.025 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.765426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.765457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.765824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.765853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.766216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.766245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.766616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.766645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.766873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.766904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.767236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.767267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 
00:34:55.026 [2024-11-25 14:32:59.767638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.767667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.768038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.768066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.768468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.768499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.768861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.768891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.769260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.769290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.769696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.769725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.770123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.770154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.770486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.770516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.770885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.770915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 00:34:55.026 [2024-11-25 14:32:59.771150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.026 [2024-11-25 14:32:59.771189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.026 qpair failed and we were unable to recover it. 
00:34:55.026 [2024-11-25 14:32:59.771461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.026 [2024-11-25 14:32:59.771491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420
00:34:55.026 qpair failed and we were unable to recover it.
00:34:55.034 [the three messages above repeat continuously from 14:32:59.771 through 14:32:59.851 as every reconnect attempt to 10.0.0.2, port 4420 fails with errno = 111 and the qpair cannot be recovered; identical log entries collapsed here]
00:34:55.034 [2024-11-25 14:32:59.851591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.851620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.851981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.852011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.852354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.852384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.852652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.852687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.853039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.853068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.853408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.853440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.853798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.853827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.854191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.854221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.854566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.854604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.854970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.855000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 
00:34:55.034 [2024-11-25 14:32:59.855359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.855389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.855748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.855777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.856113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.856142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.856401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.856431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.856778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.856809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.857119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.857149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.857495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.857525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.857896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.857925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.858279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.858310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.858668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.858697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 
00:34:55.034 [2024-11-25 14:32:59.859066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.859095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.034 [2024-11-25 14:32:59.859456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.034 [2024-11-25 14:32:59.859488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.034 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.859860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.859889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.860259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.860289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.860654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.860685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.861062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.861091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.861478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.861508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.861750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.861780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.862122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.862153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.862557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.862587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 
00:34:55.035 [2024-11-25 14:32:59.862968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.862999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.863367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.863398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.863769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.863798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.864157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.864195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.864549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.864579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.864949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.864977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.865341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.865371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.865729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.865758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.866124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.866154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.866569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.866598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 
00:34:55.035 [2024-11-25 14:32:59.866960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.866989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.867275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.867306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.867646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.867675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.868047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.868083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.868426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.868456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.868825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.868855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.035 qpair failed and we were unable to recover it. 00:34:55.035 [2024-11-25 14:32:59.869216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.035 [2024-11-25 14:32:59.869246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.869610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.869638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.869955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.869983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.870356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.870386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 
00:34:55.036 [2024-11-25 14:32:59.870747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.870777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.871141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.871180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.871548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.871577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.871952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.871982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.872346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.872377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.872736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.872766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.873095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.873124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.873497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.873527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.873892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.873921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.874364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.874396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 
00:34:55.036 [2024-11-25 14:32:59.874755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.874785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.875173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.875203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.875569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.875599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.875964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.875994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.876363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.876393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.876738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.876768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.877173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.877203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.877582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.877614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.877950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.877980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.878222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.878254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 
00:34:55.036 [2024-11-25 14:32:59.878645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.878675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.879035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.879065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.879411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.036 [2024-11-25 14:32:59.879443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.036 qpair failed and we were unable to recover it. 00:34:55.036 [2024-11-25 14:32:59.879807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.879836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.880198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.880230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.880583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.880613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.880985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.881015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.881408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.881439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.881779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.881808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.882191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.882222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 
00:34:55.037 [2024-11-25 14:32:59.882620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.882649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.883016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.883044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.883452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.883482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.883834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.883870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.884227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.884256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.884609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.884641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.885037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.885069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.885425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.885458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.885815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.885846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.886216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.886249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 
00:34:55.037 [2024-11-25 14:32:59.886490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.886528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.886864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.886898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.887232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.887268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.887634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.887670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.888031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.888060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.888414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.888450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.888790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.888823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.889256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.889287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.889633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.889668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.890037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.890069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 
00:34:55.037 [2024-11-25 14:32:59.890416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.890449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.890824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.037 [2024-11-25 14:32:59.890856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.037 qpair failed and we were unable to recover it. 00:34:55.037 [2024-11-25 14:32:59.891197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.891227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.891567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.891600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.891977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.892010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.892379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.892411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.892703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.892734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.893115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.893146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.893513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.893542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.893902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.893931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 
00:34:55.038 [2024-11-25 14:32:59.894309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.894341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.894701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.894730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.895091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.895120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.895483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.895513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.895869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.895898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.896258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.896289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.896649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.896679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.897055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.897085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.897461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.897492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.897850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.897880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 
00:34:55.038 [2024-11-25 14:32:59.898241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.898271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.898658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.898687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.899037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.899065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.899425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.899461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.899818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.899847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.900216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.900247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.900508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.900537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.900804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.900833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.901184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.901214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 00:34:55.038 [2024-11-25 14:32:59.901613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.038 [2024-11-25 14:32:59.901642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.038 qpair failed and we were unable to recover it. 
00:34:55.038 [2024-11-25 14:32:59.901990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.038 [2024-11-25 14:32:59.902020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420
00:34:55.038 qpair failed and we were unable to recover it.
[The three lines above repeat continuously through [2024-11-25 14:32:59.985571] (log timestamps 00:34:55.038-00:34:55.046): every connect() retry against 10.0.0.2:4420 fails with errno = 111, and the same tqpair 0x7f70a4000b90 can never be recovered.]
00:34:55.046 [2024-11-25 14:32:59.985931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.985960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.986327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.986356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.986705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.986734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.987092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.987120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.987485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.987515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.987886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.987914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.988285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.988315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.988684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.988713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.989046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.989074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.989422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.989453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 
00:34:55.046 [2024-11-25 14:32:59.989808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.989843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.990203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.990234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.990633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.990661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.991031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.991061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.991430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.991460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.991884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.991912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.992287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.992317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.992703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.992731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.992988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.993016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.993357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.993388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 
00:34:55.046 [2024-11-25 14:32:59.993756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.993786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.994143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.994204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.994576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.994606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.994967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.994996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.995361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.995393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.995757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.046 [2024-11-25 14:32:59.995786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.046 qpair failed and we were unable to recover it. 00:34:55.046 [2024-11-25 14:32:59.996152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.996194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.996622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.996651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.997012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.997040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.997398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.997429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 
00:34:55.047 [2024-11-25 14:32:59.997796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.997824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.998184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.998213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.998566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.998595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.998958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.998987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.999344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.999373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:32:59.999739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:32:59.999768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.000109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.000139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.000508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.000539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.000897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.000927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.001276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.001308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 
00:34:55.047 [2024-11-25 14:33:00.001706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.001737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.002022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.002052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.002305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.002336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.003102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.003145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.003592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.003623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.003883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.003913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.004337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.004368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.004716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.004746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.047 [2024-11-25 14:33:00.005134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.047 [2024-11-25 14:33:00.005176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.047 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.005561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.005592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 
00:34:55.048 [2024-11-25 14:33:00.005935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.005972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.006333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.006364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.006713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.006743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.007119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.007149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.007522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.007553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.007917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.007945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.008230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.008262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.008526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.008558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.008828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.008858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.009220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.009250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 
00:34:55.048 [2024-11-25 14:33:00.009632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.009662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.010035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.010067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.010438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.010469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.010830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.010861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.011236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.011268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.011506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.011535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.011941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.011971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.012336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.012368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.012764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.012795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.013185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.013216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 
00:34:55.048 [2024-11-25 14:33:00.013588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.013618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.014038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.014067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.014472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.014505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.014872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.014903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.015129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.015167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.015450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.015482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.015917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.015946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.016367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.016400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.016771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.048 [2024-11-25 14:33:00.016802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.048 qpair failed and we were unable to recover it. 00:34:55.048 [2024-11-25 14:33:00.017050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.017082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 
00:34:55.049 [2024-11-25 14:33:00.017351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.017382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.017760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.017790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.018049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.018078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.018382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.018415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.018818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.018848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.019107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.019136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.019498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.019529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.019887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.019917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.020299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.020331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.020734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.020764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 
00:34:55.049 [2024-11-25 14:33:00.021124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.021180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.021521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.021552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.021916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.021947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.022234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.022266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.022552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.022582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.022956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.022986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.023235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.023266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.023656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.023685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.024057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.024088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.024496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.024528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 
00:34:55.049 [2024-11-25 14:33:00.024800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.024830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.025218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.025249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.025490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.025519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.025666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.025697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.025904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.025932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.026174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.026206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.026518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.026550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.026878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.026909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.049 [2024-11-25 14:33:00.027115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.049 [2024-11-25 14:33:00.027146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.049 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.027331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.027361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 
00:34:55.050 [2024-11-25 14:33:00.027665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.027696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.028020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.028051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.028486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.028519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.028878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.028913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.029217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.029249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.029638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.029668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.029907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.029938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.030325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.030356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.030722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.030751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.031105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.031135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 
00:34:55.050 [2024-11-25 14:33:00.031454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.031484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.031820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.031848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.032226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.032257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.032572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.032603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.032872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.032902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.033146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.033184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.033603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.033632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.034016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.034045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.034420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.034451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.034675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.034704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 
00:34:55.050 [2024-11-25 14:33:00.035130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.035196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.035551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.035581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.035960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.035992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.036365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.036396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.036747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.036783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.037137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.037178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.050 qpair failed and we were unable to recover it. 00:34:55.050 [2024-11-25 14:33:00.037534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.050 [2024-11-25 14:33:00.037564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.037914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.037946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.038240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.038271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.038607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.038637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 
00:34:55.051 [2024-11-25 14:33:00.038981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.039013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.039419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.039451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.039847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.039876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.040239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.040271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.040614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.040645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.040997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.041027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.041380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.041411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.041743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.041773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.042003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.042036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.042210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.042240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 
00:34:55.051 [2024-11-25 14:33:00.042630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.042659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.043008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.043037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.043423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.043454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.043821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.043850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.044230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.044261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.044608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.044638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.044999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.045028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.045372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.045402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.045752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.045781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.046124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.046172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 
00:34:55.051 [2024-11-25 14:33:00.046506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.046536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.046858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.046889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.047243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.047273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.047508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.047542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.047863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.047893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.048235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.048267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.051 [2024-11-25 14:33:00.048643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.051 [2024-11-25 14:33:00.048672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.051 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.049059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.049091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.049415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.049445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.049806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.049836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 
00:34:55.052 [2024-11-25 14:33:00.050215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.050259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.050587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.050619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.050958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.050988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.051354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.051386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.051734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.051766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.052105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.052136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.052513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.052548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.052928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.052957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.053316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.053347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.053686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.053716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 
00:34:55.052 [2024-11-25 14:33:00.054080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.054115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.054362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.054393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.054823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.054859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.055228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.055258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.055614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.055645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.055896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.055928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.056259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.056290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.056643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.056680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.057045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.057075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.057339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.057370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 
00:34:55.052 [2024-11-25 14:33:00.057637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.057666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.058017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.058047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.058426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.058457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.058778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.058808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.059084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.059115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.059493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.059527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.059895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.052 [2024-11-25 14:33:00.059924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.052 qpair failed and we were unable to recover it. 00:34:55.052 [2024-11-25 14:33:00.060253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.060285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.060611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.060642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.060959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.060989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 
00:34:55.053 [2024-11-25 14:33:00.061332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.061363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.061711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.061740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.062113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.062142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.062408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.062439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.062760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.062790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.063151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.063191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.063545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.063577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.063942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.063972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.064338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.064369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.064723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.064752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 
00:34:55.053 [2024-11-25 14:33:00.065113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.065150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.065499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.065530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.053 [2024-11-25 14:33:00.065890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.053 [2024-11-25 14:33:00.065919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.053 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.066290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.066324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.066641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.066674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.067044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.067074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.068885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.068949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.069222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.069259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.069629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.069660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.070038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.070066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 
00:34:55.327 [2024-11-25 14:33:00.070240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.070290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.070653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.070683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.071040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.071069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.071229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.071261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.071677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.071708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.071962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.327 [2024-11-25 14:33:00.071991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.327 qpair failed and we were unable to recover it. 00:34:55.327 [2024-11-25 14:33:00.072281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.072312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.072575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.072605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.072883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.072913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.073117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.073147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 
00:34:55.328 [2024-11-25 14:33:00.073323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.073354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.073704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.073735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.074046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.074076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.074326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.074358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.074738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.074768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.075135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.075188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.075560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.075589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.075941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.075972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.076230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.076261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.076527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.076558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 
00:34:55.328 [2024-11-25 14:33:00.076851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.076881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.077250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.077280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.077651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.077681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.078040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.078071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.078423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.078456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.078841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.078872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.079120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.079150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.079529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.079559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.079879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.079909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.080170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.080200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 
00:34:55.328 [2024-11-25 14:33:00.080593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.080629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.081035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.081065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.081365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.081395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.081599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.081631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.081970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.328 [2024-11-25 14:33:00.082001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.328 qpair failed and we were unable to recover it. 00:34:55.328 [2024-11-25 14:33:00.082370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.082401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.082768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.082797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.083237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.083271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.083648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.083680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.084048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.084078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 
00:34:55.329 [2024-11-25 14:33:00.084456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.084487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.084886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.084916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.085323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.085354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.085599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.085631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.085997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.086028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.086385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.086417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.086796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.086825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.087184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.087216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.087461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.087491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.087810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.087841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 
00:34:55.329 [2024-11-25 14:33:00.088214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.088244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.088622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.088651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.089002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.089031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.089375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.089406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.089764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.089794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.090188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.090244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.090617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.090634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.091028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.091044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.091423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.091439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.091765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.091779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 
00:34:55.329 [2024-11-25 14:33:00.092115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.092128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.092365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.092382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.092719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.092733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.329 [2024-11-25 14:33:00.092957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.329 [2024-11-25 14:33:00.092970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.329 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.093330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.093345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.093680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.093698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.094048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.094067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.094409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.094429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.094779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.094806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.094987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.095007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 
00:34:55.330 [2024-11-25 14:33:00.095371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.095392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.095734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.095752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.096076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.096093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.096403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.096424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.096652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.096670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.096991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.097010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.097252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.097271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.097602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.097619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.097964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.097981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.098310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.098332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 
00:34:55.330 [2024-11-25 14:33:00.098624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.098642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.098865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.098881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.099316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.099348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.099783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.099800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.100142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.100181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.100507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.100524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.100853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.100872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.101113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.101131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.101551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.101577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 00:34:55.330 [2024-11-25 14:33:00.101928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.330 [2024-11-25 14:33:00.101944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.330 qpair failed and we were unable to recover it. 
00:34:55.336 [2024-11-25 14:33:00.178777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.336 [2024-11-25 14:33:00.178808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.336 qpair failed and we were unable to recover it. 00:34:55.336 [2024-11-25 14:33:00.179048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.336 [2024-11-25 14:33:00.179079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.336 qpair failed and we were unable to recover it. 00:34:55.336 [2024-11-25 14:33:00.179436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.336 [2024-11-25 14:33:00.179467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.336 qpair failed and we were unable to recover it. 00:34:55.336 [2024-11-25 14:33:00.179854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.336 [2024-11-25 14:33:00.179883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.336 qpair failed and we were unable to recover it. 00:34:55.336 [2024-11-25 14:33:00.180130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.336 [2024-11-25 14:33:00.180170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.336 qpair failed and we were unable to recover it. 00:34:55.336 [2024-11-25 14:33:00.180564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.336 [2024-11-25 14:33:00.180595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.336 qpair failed and we were unable to recover it. 00:34:55.336 [2024-11-25 14:33:00.180948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.180978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.181343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.181375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.181644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.181680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.182030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.182060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 
00:34:55.337 [2024-11-25 14:33:00.182324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.182355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.182694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.182724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.183083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.183113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.183614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.183645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.183986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.184018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.184308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.184340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.184758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.184788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.185134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.185253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.185615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.185645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.186001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.186031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 
00:34:55.337 [2024-11-25 14:33:00.186416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.186448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.188257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.188319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.188765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.188802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.189078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.189112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.189595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.189628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.189973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.190002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.190475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.190507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.190870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.190899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.191238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.191269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.191633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.191664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 
00:34:55.337 [2024-11-25 14:33:00.192029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.192058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.192507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.192538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.192932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.192961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.193329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.193359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.193743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.193773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.194131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.194171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.194523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.194555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.194901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.194930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.195303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.195336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.195599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.195628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 
00:34:55.337 [2024-11-25 14:33:00.195987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.196018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.196388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.196419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.196674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.196708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.196963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.196992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.337 [2024-11-25 14:33:00.197357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.337 [2024-11-25 14:33:00.197389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.337 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.197727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.197758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.198116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.198147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.198510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.198540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.198901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.198931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.199309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.199347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 
00:34:55.338 [2024-11-25 14:33:00.199684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.199724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.200078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.200108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.200476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.200506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.200764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.200798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.201156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.201197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.201534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.201563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.201912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.201942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.202376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.202407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.202760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.202790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.203168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.203199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 
00:34:55.338 [2024-11-25 14:33:00.203541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.203571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.204004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.204033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.204324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.204356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.204761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.204790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.205210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.205245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.205484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.338 [2024-11-25 14:33:00.205516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.338 qpair failed and we were unable to recover it. 00:34:55.338 [2024-11-25 14:33:00.205885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.205916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.206263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.206293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.206658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.206687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.207047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.207077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 
00:34:55.339 [2024-11-25 14:33:00.207478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.207509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.207856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.207885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.208239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.208270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.208622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.208653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.209062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.209092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.209429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.209459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.209821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.209850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.210219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.210250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.210644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.210674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.211031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.211060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 
00:34:55.339 [2024-11-25 14:33:00.211315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.211349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.211713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.211742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.212098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.212127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.212474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.212506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.212813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.212843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.213183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.213215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.213554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.213584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.213940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.213969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.339 [2024-11-25 14:33:00.214345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.339 [2024-11-25 14:33:00.214378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.339 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.214745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.214774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 
00:34:55.340 [2024-11-25 14:33:00.215032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.215064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.215418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.215450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.215807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.215836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.216210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.216240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.216638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.216668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.217016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.217045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.217433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.217464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.217826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.217857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.218110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.218141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.218499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.218530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 
00:34:55.340 [2024-11-25 14:33:00.218891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.218921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.219295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.219326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.219689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.219719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.220079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.220110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.220478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.220510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.220872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.220901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.221255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.221287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.221621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.340 [2024-11-25 14:33:00.221651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.340 qpair failed and we were unable to recover it. 00:34:55.340 [2024-11-25 14:33:00.222024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.222054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.222409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.222441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 
00:34:55.341 [2024-11-25 14:33:00.222772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.222802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.223148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.223190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.223546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.223576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.223946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.223976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.224340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.224373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.224736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.224765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.225127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.225155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.225558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.225594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.225934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.225965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 00:34:55.341 [2024-11-25 14:33:00.226273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.341 [2024-11-25 14:33:00.226304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:55.341 qpair failed and we were unable to recover it. 
[... 3 further repetitions of the same connect()/qpair-failure sequence against tqpair=0xae20c0 (timestamps 2024-11-25 14:33:00.226648 through 14:33:00.227474) ...]
00:34:55.341 Read completed with error (sct=0, sc=8)
00:34:55.341 starting I/O failed
[... 31 more queued reads and writes complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:34:55.342 [2024-11-25 14:33:00.228259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... 32 more reads and writes complete with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:34:55.342 [2024-11-25 14:33:00.229044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:34:55.342 [2024-11-25 14:33:00.229610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.342 [2024-11-25 14:33:00.229732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:55.342 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure sequence repeats 39 more times against tqpair=0x7f70ac000b90 (timestamps 2024-11-25 14:33:00.230194 through 14:33:00.244960), every attempt failing with errno = 111, addr=10.0.0.2, port=4420 ...]
00:34:55.344 [2024-11-25 14:33:00.245292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.245323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.344 qpair failed and we were unable to recover it. 00:34:55.344 [2024-11-25 14:33:00.245690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.245719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.344 qpair failed and we were unable to recover it. 00:34:55.344 [2024-11-25 14:33:00.246063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.246092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.344 qpair failed and we were unable to recover it. 00:34:55.344 [2024-11-25 14:33:00.246445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.246477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.344 qpair failed and we were unable to recover it. 00:34:55.344 [2024-11-25 14:33:00.246720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.246750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.344 qpair failed and we were unable to recover it. 00:34:55.344 [2024-11-25 14:33:00.247089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.247119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.344 qpair failed and we were unable to recover it. 00:34:55.344 [2024-11-25 14:33:00.247494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.247526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.344 qpair failed and we were unable to recover it. 00:34:55.344 [2024-11-25 14:33:00.247936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.344 [2024-11-25 14:33:00.247974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.248342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.248374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.248718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.248749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 
00:34:55.345 [2024-11-25 14:33:00.249112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.249141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.249576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.249606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.249975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.250005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.250388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.250422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.250647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.250676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.251012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.251042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.251395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.251432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.251802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.251831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.252198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.252231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.252558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.252588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 
00:34:55.345 [2024-11-25 14:33:00.252916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.252945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.253309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.253340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.253679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.253709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.254039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.254068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.345 [2024-11-25 14:33:00.254463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.345 [2024-11-25 14:33:00.254494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.345 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.254852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.254881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.255199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.255229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.255500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.255529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.255895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.255925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.256273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.256304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 
00:34:55.346 [2024-11-25 14:33:00.256650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.256679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.257059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.257088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.257450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.257480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.257873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.257902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.258173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.258204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.258583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.258613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.258992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.259022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.259254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.259287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.259687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.259717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.260091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.260124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 
00:34:55.346 [2024-11-25 14:33:00.260300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.260334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.346 [2024-11-25 14:33:00.260753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.346 [2024-11-25 14:33:00.260783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.346 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.261140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.261186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.261521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.261552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.261957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.261989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.262349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.262381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.262651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.262680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.262999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.263037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.263394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.263426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.263768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.263798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 
00:34:55.347 [2024-11-25 14:33:00.264082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.264111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.264444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.264476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.264845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.264876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.265114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.265148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.265534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.265565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.265940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.265970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.266294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.266327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.266707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.266738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.267105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.267140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.267416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.267447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 
00:34:55.347 [2024-11-25 14:33:00.267808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.267839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.268105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.268137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.268546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.268577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.347 [2024-11-25 14:33:00.268897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.347 [2024-11-25 14:33:00.268930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.347 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.269263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.269295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.269678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.269708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.270062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.270092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.270481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.270512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.270743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.270775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.271043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.271074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 
00:34:55.348 [2024-11-25 14:33:00.271419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.271451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.271813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.271843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.272221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.272253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.272622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.272653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.272999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.273030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.273386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.273418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.273755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.273786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.274145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.274187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.274555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.274585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.274954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.274985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 
00:34:55.348 [2024-11-25 14:33:00.275333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.275365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.275731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.275760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.276010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.276040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.276291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.276323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.276713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.276744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.348 [2024-11-25 14:33:00.277102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.348 [2024-11-25 14:33:00.277136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.348 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.277529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.277559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.277929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.277966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.278289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.278321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.278644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.278673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 
00:34:55.349 [2024-11-25 14:33:00.278931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.278963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.279315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.279347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.279574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.279605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.279951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.279980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.280340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.280372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.280711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.280742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.281104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.281134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.281556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.281587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.281945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.281975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.282297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.282329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 
00:34:55.349 [2024-11-25 14:33:00.282698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.282728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.283078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.283109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.283470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.283502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.283847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.283878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.284245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.284276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.284642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.284673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.285024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.285055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.349 qpair failed and we were unable to recover it. 00:34:55.349 [2024-11-25 14:33:00.285422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.349 [2024-11-25 14:33:00.285453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.285677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.285707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.286054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.286085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 
00:34:55.350 [2024-11-25 14:33:00.286413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.286444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.286761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.286791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.287156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.287195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.287517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.287547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.287911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.287942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.288186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.288218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.288620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.288650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.289007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.289037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.289388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.289420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.289643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.289673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 
00:34:55.350 [2024-11-25 14:33:00.290043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.290074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.290433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.290464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.290750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.290781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.291120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.291149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.291526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.291568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.291941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.350 [2024-11-25 14:33:00.291972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.350 qpair failed and we were unable to recover it. 00:34:55.350 [2024-11-25 14:33:00.292225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.351 [2024-11-25 14:33:00.292256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.351 qpair failed and we were unable to recover it. 00:34:55.351 [2024-11-25 14:33:00.292621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.351 [2024-11-25 14:33:00.292656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.351 qpair failed and we were unable to recover it. 00:34:55.351 [2024-11-25 14:33:00.293033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.351 [2024-11-25 14:33:00.293063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.351 qpair failed and we were unable to recover it. 00:34:55.351 [2024-11-25 14:33:00.293445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.351 [2024-11-25 14:33:00.293477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.351 qpair failed and we were unable to recover it. 
00:34:55.351 [2024-11-25 14:33:00.293839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.351 [2024-11-25 14:33:00.293869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.351 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats roughly 200 more times, with only the microsecond timestamps advancing from 14:33:00.294 through 14:33:00.369 ...]
00:34:55.359 [2024-11-25 14:33:00.370016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.370045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.370457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.370487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.370820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.370849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.371204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.371235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.371621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.371650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.372004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.372033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.372311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.372342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.372598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.372629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.372991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.373021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.373457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.373488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 
00:34:55.359 [2024-11-25 14:33:00.373826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.373858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.374092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.374126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.374439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.374480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.374845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.374875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.375250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.375291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.375660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.375692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.376049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.376079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.376460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.376496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.376879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.376910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.377177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.377209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 
00:34:55.359 [2024-11-25 14:33:00.377579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.377608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.377971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.378004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.378442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.378474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.378829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.378863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.379233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.359 [2024-11-25 14:33:00.379267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.359 qpair failed and we were unable to recover it. 00:34:55.359 [2024-11-25 14:33:00.379610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.379642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.380076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.380108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.380549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.380580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.380985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.381016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.381404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.381435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 
00:34:55.360 [2024-11-25 14:33:00.381800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.381831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.382136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.382178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.382537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.382568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.382929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.382958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.383323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.383353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.383783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.383815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.384156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.384193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.384558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.384587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.384947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.384977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.385337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.385368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 
00:34:55.360 [2024-11-25 14:33:00.385628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.385657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.386010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.386041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.386439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.386470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.386841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.386869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.387218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.387249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.387651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.387679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.388028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.388058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.388410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.388441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.388819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.388848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.389191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.389221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 
00:34:55.360 [2024-11-25 14:33:00.389627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.389656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.390015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.390044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.390412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.390442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.390812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.390841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.391155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.391203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.391566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.391596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.391966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.360 [2024-11-25 14:33:00.391995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.360 qpair failed and we were unable to recover it. 00:34:55.360 [2024-11-25 14:33:00.392370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.392401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.392748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.392777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.393040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.393069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 
00:34:55.361 [2024-11-25 14:33:00.393286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.393320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.393676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.393713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.394079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.394108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.394447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.394480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.394838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.394869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.395231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.395262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.395621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.395651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.395889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.395922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.396296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.396328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.396680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.396711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 
00:34:55.361 [2024-11-25 14:33:00.397082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.397112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.397515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.397545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.397896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.397926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.398280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.398313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.398678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.398707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.399066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.399095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.399467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.399498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.399861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.399891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.400141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.400184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.400568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.400598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 
00:34:55.361 [2024-11-25 14:33:00.400966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.400996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.401370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.401401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.401782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.401811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.402174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.402206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.361 [2024-11-25 14:33:00.402599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.361 [2024-11-25 14:33:00.402629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.361 qpair failed and we were unable to recover it. 00:34:55.640 [2024-11-25 14:33:00.402968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.640 [2024-11-25 14:33:00.403000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.640 qpair failed and we were unable to recover it. 00:34:55.640 [2024-11-25 14:33:00.403258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.640 [2024-11-25 14:33:00.403291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.640 qpair failed and we were unable to recover it. 00:34:55.640 [2024-11-25 14:33:00.403665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.640 [2024-11-25 14:33:00.403695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.640 qpair failed and we were unable to recover it. 00:34:55.640 [2024-11-25 14:33:00.404058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.640 [2024-11-25 14:33:00.404087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.640 qpair failed and we were unable to recover it. 00:34:55.640 [2024-11-25 14:33:00.404466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.640 [2024-11-25 14:33:00.404497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.640 qpair failed and we were unable to recover it. 
00:34:55.640 [2024-11-25 14:33:00.404859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.640 [2024-11-25 14:33:00.404889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.640 qpair failed and we were unable to recover it. 00:34:55.640 [2024-11-25 14:33:00.405230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.405261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.405620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.405649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.406011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.406043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.406412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.406450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.406806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.406836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.407202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.407233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.407593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.407623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.407995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.408024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.408396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.408425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 
00:34:55.641 [2024-11-25 14:33:00.408794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.408823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.409199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.409230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.409619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.409647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.409991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.410019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.410385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.410415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.410667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.410696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.411057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.411087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.411437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.411467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.411830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.411860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.412097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.412129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 
00:34:55.641 [2024-11-25 14:33:00.412503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.412532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.412899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.412928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.413288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.413319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.413697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.413726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.414083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.414111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.414466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.414496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.414864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.414893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.415260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.415291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.415675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.415703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.416055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.416084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 
00:34:55.641 [2024-11-25 14:33:00.416443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.416474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.416737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.416769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.417116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.417146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.417511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.417541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.417900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.417928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.418284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.418315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.418664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.418693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.641 qpair failed and we were unable to recover it. 00:34:55.641 [2024-11-25 14:33:00.418942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.641 [2024-11-25 14:33:00.418970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.642 qpair failed and we were unable to recover it. 00:34:55.642 [2024-11-25 14:33:00.419315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.642 [2024-11-25 14:33:00.419345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.642 qpair failed and we were unable to recover it. 00:34:55.642 [2024-11-25 14:33:00.419713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.642 [2024-11-25 14:33:00.419742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.642 qpair failed and we were unable to recover it. 
00:34:55.642 [2024-11-25 14:33:00.420104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.642 [2024-11-25 14:33:00.420132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.642 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x7f70ac000b90 at 10.0.0.2:4420 repeat continuously from 14:33:00.420 through 14:33:00.499 (Jenkins timestamps 00:34:55.642-00:34:55.648); roughly 200 duplicate entries elided ...]
00:34:55.648 [2024-11-25 14:33:00.499321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.499352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.499590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.499619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.499984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.500014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.500387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.500417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.500807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.500836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.501197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.501228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.501625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.501654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.501899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.501927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.502257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.502288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.502648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.502676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 
00:34:55.648 [2024-11-25 14:33:00.502962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.502992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.503337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.503368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.503607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.503637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.503997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.504027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.504411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.504442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.504821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.504851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.505225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.505269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.505598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.505629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.505869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.505901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.506270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.506301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 
00:34:55.648 [2024-11-25 14:33:00.506677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.506707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.506946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.506978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.507334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.507364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.507739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.507769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.508131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.508169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.508588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.508618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.508988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.509017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.509428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.509460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.509818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.648 [2024-11-25 14:33:00.509849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.648 qpair failed and we were unable to recover it. 00:34:55.648 [2024-11-25 14:33:00.510205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.510237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 
00:34:55.649 [2024-11-25 14:33:00.510492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.510523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.510871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.510901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.511254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.511285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.511684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.511714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.512068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.512097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.512497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.512528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.512882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.512912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.513281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.513313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.513677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.513708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.513948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.513978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 
00:34:55.649 [2024-11-25 14:33:00.514227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.514258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.514628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.514658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.515023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.515053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.515412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.515443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.515789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.515819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.516185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.516216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.516450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.516482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.516850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.516880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.517229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.517260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.517637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.517667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 
00:34:55.649 [2024-11-25 14:33:00.517816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.517848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.518219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.518251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.518626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.518656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.519016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.519045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.519394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.519426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.519805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.519835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.520211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.520242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.520611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.520640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.520994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.521024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.521390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.521421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 
00:34:55.649 [2024-11-25 14:33:00.521784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.521813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.522201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.522239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.522515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.522545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.522799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.522827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.523200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.523231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.523457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.523486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.523945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.649 [2024-11-25 14:33:00.523975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.649 qpair failed and we were unable to recover it. 00:34:55.649 [2024-11-25 14:33:00.524325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.524356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.524627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.524655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.525003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.525032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 
00:34:55.650 [2024-11-25 14:33:00.525437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.525468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.525717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.525745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.526103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.526132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.526504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.526535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.526884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.526915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.527218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.527252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.527602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.527632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.527960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.527990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.528361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.528391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.528753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.528782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 
00:34:55.650 [2024-11-25 14:33:00.529180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.529211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.529598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.529629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.529997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.530029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.530362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.530394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.530766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.530795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.531051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.531083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.531310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.531342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.531689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.531719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.531968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.532002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.532413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.532442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 
00:34:55.650 [2024-11-25 14:33:00.532685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.532714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.533092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.533122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.533550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.533581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.533944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.533974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.534211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.534241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.534623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.534653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.535012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.535041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.535420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.535451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.535792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.535822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 00:34:55.650 [2024-11-25 14:33:00.536081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.536110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.650 qpair failed and we were unable to recover it. 
00:34:55.650 [2024-11-25 14:33:00.536412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.650 [2024-11-25 14:33:00.536442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.536804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.536841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.537240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.537271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.537503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.537536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.537907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.537936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.538221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.538251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.538684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.538714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.538956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.538987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.539241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.539276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.539651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.539681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 
00:34:55.651 [2024-11-25 14:33:00.540114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.540144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.540519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.540550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.540923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.540952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.541130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.541179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.541455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.541486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.541866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.541896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.542263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.542294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.542555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.542583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.542938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.542968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.543225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.543259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 
00:34:55.651 [2024-11-25 14:33:00.543634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.543664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.544036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.544065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.544480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.544510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.544868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.544899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.545272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.545303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.545554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.545583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.545988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.546017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.546397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.546428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.546802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.546832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 00:34:55.651 [2024-11-25 14:33:00.547187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.651 [2024-11-25 14:33:00.547220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.651 qpair failed and we were unable to recover it. 
00:34:55.651 [2024-11-25 14:33:00.547598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.651 [2024-11-25 14:33:00.547628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:55.651 qpair failed and we were unable to recover it.
00:34:55.651 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats for every subsequent reconnect attempt against tqpair=0x7f70ac000b90 (addr=10.0.0.2, port=4420) from 2024-11-25 14:33:00.547 through 14:33:00.627; each attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:34:55.657 [2024-11-25 14:33:00.627656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.657 [2024-11-25 14:33:00.627686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:55.657 qpair failed and we were unable to recover it.
00:34:55.657 [2024-11-25 14:33:00.628042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.628071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.628425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.628463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.628821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.628851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.629224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.629255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.629602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.629631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.629995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.630025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.630392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.630423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.630797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.630827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.631061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.631093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.631283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.631314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 
00:34:55.657 [2024-11-25 14:33:00.631739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.631768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.632120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.632149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.632514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.632546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.632910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.632939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.633304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.633339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.633692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.633730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.634127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.634156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.634519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.634548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.634818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.634847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.635205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.635237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 
00:34:55.657 [2024-11-25 14:33:00.635605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.635637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.636022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.657 [2024-11-25 14:33:00.636050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.657 qpair failed and we were unable to recover it. 00:34:55.657 [2024-11-25 14:33:00.636289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.636323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.636606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.636636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.637012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.637041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.637390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.637422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.637778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.637806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.638173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.638204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.638567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.638597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.638969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.638999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 
00:34:55.658 [2024-11-25 14:33:00.639385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.639417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.639779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.639808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.640178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.640208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.640555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.640593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.640949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.640979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.641346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.641377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.641741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.641771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.642005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.642037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.642413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.642444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.642817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.642846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 
00:34:55.658 [2024-11-25 14:33:00.643214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.643246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.643636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.643671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.644009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.644040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.644422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.644452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.644849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.644878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.645130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.645171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.645553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.645582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.645956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.645985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.646337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.646368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.646812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.646842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 
00:34:55.658 [2024-11-25 14:33:00.647181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.647213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.647579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.647608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.647969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.647998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.648362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.648394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.648755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.648783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.649147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.649187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.649531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.649559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.658 [2024-11-25 14:33:00.649771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.658 [2024-11-25 14:33:00.649802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.658 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.650152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.650190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.650589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.650618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 
00:34:55.659 [2024-11-25 14:33:00.650973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.651003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.651387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.651418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.651784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.651812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.652189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.652220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.652554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.652584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.652945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.652973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.653342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.653372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.653619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.653651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.653994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.654024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.654391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.654421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 
00:34:55.659 [2024-11-25 14:33:00.654784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.654813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.655177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.655209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.655575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.655604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.655964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.655993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.656339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.656370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.656729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.656757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.656999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.657027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.657226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.657256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.657607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.657636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.658009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.658039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 
00:34:55.659 [2024-11-25 14:33:00.658384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.658414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.658781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.658818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.659180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.659211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.659578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.659608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.659962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.659991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.660354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.660385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.660734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.660763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.661117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.661146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.661519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.661547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 00:34:55.659 [2024-11-25 14:33:00.661920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.659 [2024-11-25 14:33:00.661949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.659 qpair failed and we were unable to recover it. 
00:34:55.660 [2024-11-25 14:33:00.662310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.662341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.662702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.662731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.663113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.663142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.663491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.663522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.663881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.663909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.664277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.664308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.664667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.664695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.665071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.665100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.665422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.665453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.665816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.665845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 
00:34:55.660 [2024-11-25 14:33:00.666212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.666243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.666603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.666632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.667005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.667034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.667388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.667418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.667780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.667809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.668203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.668234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.668577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.668607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.668856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.668887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.669274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.669306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.669677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.669706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 
00:34:55.660 [2024-11-25 14:33:00.670058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.670088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.670444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.670474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.670832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.670862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.671234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.671263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.671673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.671702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.672030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.672059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.672335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.672364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.672613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.672644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.673010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.673039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.673427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.673457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 
00:34:55.660 [2024-11-25 14:33:00.673818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.673847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.674215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.674252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.674496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.674530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.674894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.674923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.660 qpair failed and we were unable to recover it. 00:34:55.660 [2024-11-25 14:33:00.675285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.660 [2024-11-25 14:33:00.675316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.661 qpair failed and we were unable to recover it. 00:34:55.661 [2024-11-25 14:33:00.675711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.661 [2024-11-25 14:33:00.675740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.661 qpair failed and we were unable to recover it. 00:34:55.661 [2024-11-25 14:33:00.676105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.661 [2024-11-25 14:33:00.676133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.661 qpair failed and we were unable to recover it. 00:34:55.661 [2024-11-25 14:33:00.676512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.661 [2024-11-25 14:33:00.676542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.661 qpair failed and we were unable to recover it. 00:34:55.661 [2024-11-25 14:33:00.676901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.661 [2024-11-25 14:33:00.676931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.661 qpair failed and we were unable to recover it. 00:34:55.661 [2024-11-25 14:33:00.677279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.661 [2024-11-25 14:33:00.677310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.661 qpair failed and we were unable to recover it. 
00:34:55.661 [2024-11-25 14:33:00.677677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.661 [2024-11-25 14:33:00.677705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:55.661 qpair failed and we were unable to recover it.
[... ~200 further identical connect() failures (errno = 111) against tqpair=0x7f70ac000b90, addr=10.0.0.2, port=4420, logged between 14:33:00.678 and 14:33:00.757, elided ...]
00:34:55.946 [2024-11-25 14:33:00.757195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:55.946 [2024-11-25 14:33:00.757224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:55.946 qpair failed and we were unable to recover it.
00:34:55.946 [2024-11-25 14:33:00.757405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.757433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.757815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.757844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.758141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.758204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.758587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.758618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.758977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.759007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.759380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.759411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.759656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.759685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.759911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.759940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.760319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.760360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.760717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.760746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 
00:34:55.946 [2024-11-25 14:33:00.761103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.761133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.761497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.761527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.761888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.761918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.762300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.762330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.762714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.762742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.763111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.763142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.763520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.763550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.763812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.763840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.764192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.764222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.764572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.764602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 
00:34:55.946 [2024-11-25 14:33:00.764944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.764972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.946 qpair failed and we were unable to recover it. 00:34:55.946 [2024-11-25 14:33:00.765328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.946 [2024-11-25 14:33:00.765359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.765728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.765757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.766130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.766182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.766461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.766492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.766862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.766891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.767326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.767357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.767605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.767635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.767988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.768017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.768386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.768418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 
00:34:55.947 [2024-11-25 14:33:00.768786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.768816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.769190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.769220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.769589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.769619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.769861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.769889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.770271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.770301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.770554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.770583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.770939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.770969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.771337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.771368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.771709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.771745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.772078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.772106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 
00:34:55.947 [2024-11-25 14:33:00.772449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.772481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.772840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.772868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.773210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.773241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.773604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.773635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.773899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.773928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.774289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.774319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.774580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.774612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.947 [2024-11-25 14:33:00.774957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.947 [2024-11-25 14:33:00.774988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.947 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.775340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.775380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.775740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.775771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 
00:34:55.948 [2024-11-25 14:33:00.776124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.776153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.776520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.776550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.776916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.776945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.777310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.777340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.777703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.777733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.777981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.778010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.778341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.778372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.778739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.778770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.779136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.779174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.779539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.779569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 
00:34:55.948 [2024-11-25 14:33:00.779910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.779941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.780306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.780336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.780698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.780728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.781100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.781128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.781496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.781527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.781888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.781918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.782289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.782320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.782663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.782694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.783051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.783081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.783443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.783474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 
00:34:55.948 [2024-11-25 14:33:00.783892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.783922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.784275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.784307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.784559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.784589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.784939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.784969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.785227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.948 [2024-11-25 14:33:00.785258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.948 qpair failed and we were unable to recover it. 00:34:55.948 [2024-11-25 14:33:00.785630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.785659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.786027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.786057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.786465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.786497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.786748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.786777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.787126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.787156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 
00:34:55.949 [2024-11-25 14:33:00.787513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.787543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.787905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.787935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.788277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.788307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.788568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.788598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.788838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.788869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.789317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.789349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.789687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.789717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.790084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.790114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.790566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.790606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.790861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.790889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 
00:34:55.949 [2024-11-25 14:33:00.791238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.791269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.791493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.791521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.791888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.791918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.792284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.792314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.792684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.792715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.793089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.793119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.793462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.793494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.793757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.793785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.794169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.794201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.794528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.794557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 
00:34:55.949 [2024-11-25 14:33:00.795013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.795041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.795428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.795459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.795820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.795850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.796210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.796241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.796607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.796638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.949 [2024-11-25 14:33:00.797005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.949 [2024-11-25 14:33:00.797034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.949 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.797288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.797320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.797698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.797729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.798089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.798119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.798463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.798494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 
00:34:55.950 [2024-11-25 14:33:00.798858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.798889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.799236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.799267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.799632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.799663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.800011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.800041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.800266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.800296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.800687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.800717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.801066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.801096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.801449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.801480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.801845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.801874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.802334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.802364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 
00:34:55.950 [2024-11-25 14:33:00.802643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.802672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.803065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.803094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.803347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.803378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.803747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.803776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.804026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.804057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.804366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.804400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.804646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.804676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.804914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.804944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.805308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.805344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 00:34:55.950 [2024-11-25 14:33:00.805708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.805738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it. 
00:34:55.950 [2024-11-25 14:33:00.806100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.950 [2024-11-25 14:33:00.806129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.950 qpair failed and we were unable to recover it.
[... this three-message sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously, with only the timestamps advancing, from 2024-11-25 14:33:00.806100 through 2024-11-25 14:33:00.885550 (log clock 00:34:55.950 through 00:34:55.958) ...]
00:34:55.958 [2024-11-25 14:33:00.885521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.885550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it.
00:34:55.958 [2024-11-25 14:33:00.885902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.885932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.886278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.886309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.886680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.886710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.887074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.887105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.887561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.887592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.887925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.887955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.888200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.888232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.888578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.888608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.888948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.888981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.889325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.889358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 
00:34:55.958 [2024-11-25 14:33:00.889735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.889764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.890129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.890169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.890604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.890636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.890996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.891029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.891329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.891361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.891738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.891769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.892123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.892153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.892452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.892483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.892840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.892870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.893235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.893266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 
00:34:55.958 [2024-11-25 14:33:00.893634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.893665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.894001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.894038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.894424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.894455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.894807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.894841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.895197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.895232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.895591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.895623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.895959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.958 [2024-11-25 14:33:00.895990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.958 qpair failed and we were unable to recover it. 00:34:55.958 [2024-11-25 14:33:00.896365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.896399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.896767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.896813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.897178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.897210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 
00:34:55.959 [2024-11-25 14:33:00.897568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.897601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.897944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.897974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.898317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.898347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.898708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.898737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.899103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.899132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.899447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.899477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.899722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.899751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.900187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.900217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.900596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.900624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.900976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.901005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 
00:34:55.959 [2024-11-25 14:33:00.901371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.901402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.901775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.901804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.902169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.902200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.902456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.902484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.902828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.902857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.903236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.903266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.903612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.903642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.903891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.903920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.904290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.904321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.904709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.904739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 
00:34:55.959 [2024-11-25 14:33:00.905096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.905125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.905495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.905525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.905877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.905908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.906242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.906272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.959 qpair failed and we were unable to recover it. 00:34:55.959 [2024-11-25 14:33:00.906642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.959 [2024-11-25 14:33:00.906670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.907036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.907070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.907409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.907440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.907677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.907705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.908075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.908104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.908435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.908473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 
00:34:55.960 [2024-11-25 14:33:00.908815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.908844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.909088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.909118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.909491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.909522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.909884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.909912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.910282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.910312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.910562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.910593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.910965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.910995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.911366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.911397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.911763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.911791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.912155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.912195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 
00:34:55.960 [2024-11-25 14:33:00.912525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.912554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.912894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.912923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.913287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.913316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.913679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.913709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.914076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.914105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.914476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.914505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.914870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.914899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.915269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.915301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.915583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.915612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.915968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.915996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 
00:34:55.960 [2024-11-25 14:33:00.916444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.916475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.916815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.916845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.917203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.917235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.917564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.917595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.917960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.960 [2024-11-25 14:33:00.917989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.960 qpair failed and we were unable to recover it. 00:34:55.960 [2024-11-25 14:33:00.918250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.918279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.918652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.918681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.918975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.919004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.919343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.919373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.919686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.919715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 
00:34:55.961 [2024-11-25 14:33:00.920086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.920114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.920476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.920506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.920873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.920901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.921279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.921310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.921665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.921694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.922042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.922077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.922418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.922447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.922782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.922812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.923187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.923217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.923547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.923577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 
00:34:55.961 [2024-11-25 14:33:00.923810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.923842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.924228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.924259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.924633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.924662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.925048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.925077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.925418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.925449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.925795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.925823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.926072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.926100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.926428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.926459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.926864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.926893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.927250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.927281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 
00:34:55.961 [2024-11-25 14:33:00.927662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.927692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.928058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.928087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.928495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.928525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.928870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.928900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.929258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.929289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.929670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.929699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.930059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.930089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.930459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.961 [2024-11-25 14:33:00.930491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.961 qpair failed and we were unable to recover it. 00:34:55.961 [2024-11-25 14:33:00.930849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.930879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.931238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.931268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 
00:34:55.962 [2024-11-25 14:33:00.931616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.931647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.932011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.932040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.932336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.932366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.932738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.932768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.933126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.933155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.933528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.933557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.933932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.933961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.934326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.934355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.934717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.934745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.935109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.935137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 
00:34:55.962 [2024-11-25 14:33:00.935572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.935601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.935957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.935986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.936352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.936383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.936730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.936759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.937129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.937166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.937522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.937557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.937916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.937943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.938195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.938225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.938497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.938526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 00:34:55.962 [2024-11-25 14:33:00.938881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.962 [2024-11-25 14:33:00.938911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.962 qpair failed and we were unable to recover it. 
00:34:55.969 [2024-11-25 14:33:01.012636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.969 [2024-11-25 14:33:01.012665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.969 qpair failed and we were unable to recover it. 00:34:55.969 [2024-11-25 14:33:01.012997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.969 [2024-11-25 14:33:01.013025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.969 qpair failed and we were unable to recover it. 00:34:55.969 [2024-11-25 14:33:01.013402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:55.969 [2024-11-25 14:33:01.013432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:55.969 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.013798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.013829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.014188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.014223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.014561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.014596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.014943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.014971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.015243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.015274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.015644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.015672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.017364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.017435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 
00:34:56.243 [2024-11-25 14:33:01.017826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.017857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.018215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.243 [2024-11-25 14:33:01.018247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.243 qpair failed and we were unable to recover it. 00:34:56.243 [2024-11-25 14:33:01.018611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.018640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.018883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.018916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.019259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.019289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.019656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.019687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.020045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.020074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.020464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.020495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.020790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.020821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.021184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.021216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 
00:34:56.244 [2024-11-25 14:33:01.021574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.021604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.022031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.022060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.022420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.022452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.022844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.022873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.023226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.023256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.023630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.023659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.023968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.023997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.024341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.024371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.024623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.024655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.024913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.024944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 
00:34:56.244 [2024-11-25 14:33:01.025291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.025322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.025665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.025695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.026059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.026089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.026344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.026375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.026732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.026762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.027053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.027083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.027482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.027512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.027855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.027884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.028127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.028184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.028569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.028599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 
00:34:56.244 [2024-11-25 14:33:01.028971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.029001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.029366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.029397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.029648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.029681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.030035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.030064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.030407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.244 [2024-11-25 14:33:01.030438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.244 qpair failed and we were unable to recover it. 00:34:56.244 [2024-11-25 14:33:01.030697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.030733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.031113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.031142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.031512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.031542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.031923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.031953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.032323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.032354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 
00:34:56.245 [2024-11-25 14:33:01.032717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.032747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.033113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.033142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.033553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.033583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.033978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.034008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.034385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.034416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.034778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.034808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.035055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.035086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.035470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.035500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.035871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.035900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.036264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.036294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 
00:34:56.245 [2024-11-25 14:33:01.036663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.036692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.037055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.037084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.037426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.037456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.037814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.037843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.038148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.038191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.038518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.038547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.038905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.038935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.039283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.039315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.039689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.039720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.040068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.040097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 
00:34:56.245 [2024-11-25 14:33:01.040450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.040482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.040845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.040875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.041229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.041260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.041656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.041687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.042052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.042081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.042347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.042377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.042774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.042803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.245 [2024-11-25 14:33:01.043177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.245 [2024-11-25 14:33:01.043209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.245 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.043554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.043583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.043824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.043856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 
00:34:56.246 [2024-11-25 14:33:01.044212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.044244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.044593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.044622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.044983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.045013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.045438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.045469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.045859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.045889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.046289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.046342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.046709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.046739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.047100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.047129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.047517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.047547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.047914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.047945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 
00:34:56.246 [2024-11-25 14:33:01.048305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.048336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.048629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.048659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.049030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.049060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.049443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.049474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.049835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.049867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.050238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.050269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.050618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.050647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.051016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.051046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.051429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.051460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.051859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.051889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 
00:34:56.246 [2024-11-25 14:33:01.052288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.052320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.052673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.052703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.052952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.052981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.053335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.053366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.053674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.053704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.246 [2024-11-25 14:33:01.054085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.246 [2024-11-25 14:33:01.054115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.246 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.054474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.054505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.054887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.054917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.055272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.055303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.055672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.055702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 
00:34:56.247 [2024-11-25 14:33:01.056062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.056093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.056459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.056491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.056851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.056881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.057126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.057166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.057463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.057493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.057845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.057875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.058243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.058274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.058677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.058707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.059061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.059093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.059466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.059498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 
00:34:56.247 [2024-11-25 14:33:01.059856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.059886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.060250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.060281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.060653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.060682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.060979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.061009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.061250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.061281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.061610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.061646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.061982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.062012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.062386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.062416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.062783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.062813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.062931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.062962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 
00:34:56.247 [2024-11-25 14:33:01.063310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.063341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.063701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.063730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.247 [2024-11-25 14:33:01.064090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.247 [2024-11-25 14:33:01.064120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.247 qpair failed and we were unable to recover it. 00:34:56.248 [2024-11-25 14:33:01.064502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-25 14:33:01.064536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.248 qpair failed and we were unable to recover it. 00:34:56.248 [2024-11-25 14:33:01.064886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-25 14:33:01.064916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.248 qpair failed and we were unable to recover it. 00:34:56.248 [2024-11-25 14:33:01.065267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-25 14:33:01.065299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.248 qpair failed and we were unable to recover it. 00:34:56.248 [2024-11-25 14:33:01.065662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-25 14:33:01.065691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.248 qpair failed and we were unable to recover it. 00:34:56.248 [2024-11-25 14:33:01.066051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-25 14:33:01.066080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.248 qpair failed and we were unable to recover it. 00:34:56.248 [2024-11-25 14:33:01.066489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-25 14:33:01.066520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.248 qpair failed and we were unable to recover it. 00:34:56.248 [2024-11-25 14:33:01.066901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.248 [2024-11-25 14:33:01.066932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.248 qpair failed and we were unable to recover it. 
00:34:56.248 [2024-11-25 14:33:01.067311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.248 [2024-11-25 14:33:01.067343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:56.248 qpair failed and we were unable to recover it.
00:34:56.248 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 14:33:01.067 and 14:33:01.146; identical repetitions elided ...]
00:34:56.256 [2024-11-25 14:33:01.146767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.256 [2024-11-25 14:33:01.146796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:56.256 qpair failed and we were unable to recover it.
00:34:56.256 [2024-11-25 14:33:01.147153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.147192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.147556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.147589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.147890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.147926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.148259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.148290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.148662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.148692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.149066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.149095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.149406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.149438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.149797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.149827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.150180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.150212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.150546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.150576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 
00:34:56.256 [2024-11-25 14:33:01.150938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.150968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.151180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.151215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.151462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.151492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.151734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.151766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.152137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.152178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.152580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.152610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.152974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.153004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.153391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.153423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.153849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.153879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.154235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.154267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 
00:34:56.256 [2024-11-25 14:33:01.154627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.154656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.155016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.155046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.155459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.155490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.155855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.155885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.156126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.156156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.156553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.156583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.256 qpair failed and we were unable to recover it. 00:34:56.256 [2024-11-25 14:33:01.156936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.256 [2024-11-25 14:33:01.156967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.157333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.157364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.157733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.157763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.158126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.158156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 
00:34:56.257 [2024-11-25 14:33:01.158505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.158535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.158895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.158924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.159294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.159326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.159661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.159690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.160055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.160085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.160336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.160370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.160727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.160758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.161125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.161154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.161582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.161616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.161950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.161980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 
00:34:56.257 [2024-11-25 14:33:01.162326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.162357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.162607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.162640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.162994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.163031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.163400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.163431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.163801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.163830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.164194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.164224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.164565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.164594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.164968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.164997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.165246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.165279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.165678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.165707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 
00:34:56.257 [2024-11-25 14:33:01.166068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.166097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.166458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.166489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.166851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.166881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.167246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.167276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.167682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.167711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.168039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.168068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.168415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.257 [2024-11-25 14:33:01.168445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.257 qpair failed and we were unable to recover it. 00:34:56.257 [2024-11-25 14:33:01.168806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.168836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.169197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.169227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.169462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.169495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 
00:34:56.258 [2024-11-25 14:33:01.169853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.169882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.170142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.170202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.170583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.170612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.170954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.170983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.171381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.171412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.171751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.171779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.172137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.172173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.172544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.172574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.172938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.172966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.173326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.173356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 
00:34:56.258 [2024-11-25 14:33:01.173608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.173638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.173991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.174021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.174423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.174454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.174705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.174737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.175102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.175132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.175532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.175563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.175927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.175957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.176317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.176348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.176712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.176741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.177105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.177133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 
00:34:56.258 [2024-11-25 14:33:01.177520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.177550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.177717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.177748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.178147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.178192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.178557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.178586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.178944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.178973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.179343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.179374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.179740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.258 [2024-11-25 14:33:01.179770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.258 qpair failed and we were unable to recover it. 00:34:56.258 [2024-11-25 14:33:01.180132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.180171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.180536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.180565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.180874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.180902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 
00:34:56.259 [2024-11-25 14:33:01.181269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.181300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.181671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.181700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.182103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.182133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.182538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.182568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.182928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.182957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.183315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.183346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.183721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.183750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.184003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.184034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.184400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.184430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.184794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.184824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 
00:34:56.259 [2024-11-25 14:33:01.185187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.185216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.185447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.185479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.185923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.185953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.186306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.186336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.186683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.186713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.187010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.187039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.187385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.187415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.187776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.187805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.188176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.188207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.188477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.188506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 
00:34:56.259 [2024-11-25 14:33:01.188885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.188915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.189275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.189307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.189683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.189711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.190070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.190099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.190476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.190505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.190878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.190907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.191273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.191303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.259 qpair failed and we were unable to recover it. 00:34:56.259 [2024-11-25 14:33:01.191546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.259 [2024-11-25 14:33:01.191575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.191940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.191968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.192344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.192374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 
00:34:56.260 [2024-11-25 14:33:01.192736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.192766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.193013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.193042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.193434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.193471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.193828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.193857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.194175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.194206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.194604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.194632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.195000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.195029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.195383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.195414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.195778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.195806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 00:34:56.260 [2024-11-25 14:33:01.196171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.196202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it. 
00:34:56.260 [2024-11-25 14:33:01.196565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.260 [2024-11-25 14:33:01.196594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.260 qpair failed and we were unable to recover it.
00:34:56.260 [... the same message pair — posix_sock_create connect() failed (errno = 111, ECONNREFUSED) followed by nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f70ac000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeated roughly 200 more times between 14:33:01.196964 and 14:33:01.277028; repetitions elided ...]
00:34:56.267 [2024-11-25 14:33:01.277426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.277456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it.
00:34:56.267 [2024-11-25 14:33:01.277817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.277848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.278226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.278258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.278617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.278648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.279062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.279096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.279456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.279487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.279873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.279903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.280262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.280295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.282801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.282861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.283233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.283268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.283687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.283719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 
00:34:56.267 [2024-11-25 14:33:01.284111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.267 [2024-11-25 14:33:01.284141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.267 qpair failed and we were unable to recover it. 00:34:56.267 [2024-11-25 14:33:01.284523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.284554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.284915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.284947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.285303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.285334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.285584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.285616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.285961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.285992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.286348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.286378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.286718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.286750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.287117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.287147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.287532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.287563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 
00:34:56.268 [2024-11-25 14:33:01.287927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.287958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.288314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.288346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.288714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.288752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.289121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.289151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.289441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.289474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.289835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.289865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.290219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.290253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.290593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.290622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.290859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.290891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.291286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.291317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 
00:34:56.268 [2024-11-25 14:33:01.291685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.291715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.292082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.292112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.292465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.292499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.292745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.292775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.293116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.293148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.293544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.293576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.293939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.293970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.295989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.296061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.296481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.296518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.296897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.296931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 
00:34:56.268 [2024-11-25 14:33:01.297199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.297231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.268 qpair failed and we were unable to recover it. 00:34:56.268 [2024-11-25 14:33:01.298426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.268 [2024-11-25 14:33:01.298475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.302190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.302261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.302724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.302767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.303183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.303223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.303691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.303729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.304117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.304171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.304569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.304609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.305002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.305040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.305463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.305505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 
00:34:56.269 [2024-11-25 14:33:01.305835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.305868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.306187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.306224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.306620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.306650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.306987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.307018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.307422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.307455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.307788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.307818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.308178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.308209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.308569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.308600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.308935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.308969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.309302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.309333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 
00:34:56.269 [2024-11-25 14:33:01.309717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.309748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.310107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.310141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.310560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.310592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.310961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.310993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.311347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.311378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.311736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.311767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.312130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.312171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.312534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.312561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.312964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.312990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.313236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.313268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 
00:34:56.269 [2024-11-25 14:33:01.313664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.269 [2024-11-25 14:33:01.313692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.269 qpair failed and we were unable to recover it. 00:34:56.269 [2024-11-25 14:33:01.314068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.314095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.314480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.314509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.314806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.314833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.315209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.315236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.315615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.315643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.316009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.316036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.316398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.316426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.316822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.316849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.317267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.317298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 
00:34:56.270 [2024-11-25 14:33:01.317659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.317688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.318043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.318072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.318416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.318445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.318810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.318839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.319211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.319240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.319615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.319641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.320008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.270 [2024-11-25 14:33:01.320034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.270 qpair failed and we were unable to recover it. 00:34:56.270 [2024-11-25 14:33:01.320379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.544 [2024-11-25 14:33:01.320405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.544 qpair failed and we were unable to recover it. 00:34:56.544 [2024-11-25 14:33:01.320770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.320799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.321171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.321207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 
00:34:56.545 [2024-11-25 14:33:01.321574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.321601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.321932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.321958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.322179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.322209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.322535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.322561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.322933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.322960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.323339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.323368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.323672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.323699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.324066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.324098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.324447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.324479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.324691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.324724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 
00:34:56.545 [2024-11-25 14:33:01.325121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.325151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.325518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.325548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.325951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.325981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.326355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.326387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.326745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.326774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.327137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.327178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.327528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.327558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.327967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.327997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.328238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.328271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.328619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.328649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 
00:34:56.545 [2024-11-25 14:33:01.329002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.329031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.545 [2024-11-25 14:33:01.329332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.545 [2024-11-25 14:33:01.329363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.545 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.329624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.329652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.330009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.330038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.330383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.330413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.330773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.330802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.331176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.331207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.331577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.331606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.331973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.332002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.332349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.332380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 
00:34:56.546 [2024-11-25 14:33:01.332749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.332778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.333214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.333245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.333582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.333611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.333914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.333942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.334302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.334332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.334697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.334727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.335085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.335114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.335483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.335512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.335875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.335904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 00:34:56.546 [2024-11-25 14:33:01.336275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.546 [2024-11-25 14:33:01.336312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.546 qpair failed and we were unable to recover it. 
00:34:56.546 [2024-11-25 14:33:01.336652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.546 [2024-11-25 14:33:01.336681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:56.546 qpair failed and we were unable to recover it.
[... the same three-line error block repeats, with only the timestamps advancing, roughly 210 times between 14:33:01.336 and 14:33:01.417: every connect() attempt from tqpair=0x7f70ac000b90 to 10.0.0.2:4420 fails with errno = 111, and each time the qpair cannot be recovered ...]
00:34:56.554 [2024-11-25 14:33:01.417433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.417463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.417808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.417838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.418200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.418232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.418618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.418649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.419001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.419032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.419365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.419395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.419739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.419769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.420151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.420203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.420610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.420639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.420987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.421016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 
00:34:56.554 [2024-11-25 14:33:01.421279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.421310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.421449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.421479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.421817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.421846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.422201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.422231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.422601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.422631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.422989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.423019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.423380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.423412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.423773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.423803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.424168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.424199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.424562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.424590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 
00:34:56.554 [2024-11-25 14:33:01.424957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.424987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.425348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.425378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.425761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.425791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.426157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.426195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.426587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.426616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.426847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.426876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.427238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.427270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.427635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.427666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.428030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.428059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.428312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.428349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 
00:34:56.554 [2024-11-25 14:33:01.428694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.554 [2024-11-25 14:33:01.428724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.554 qpair failed and we were unable to recover it. 00:34:56.554 [2024-11-25 14:33:01.429088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.429119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.429500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.429530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.429909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.429938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.430307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.430337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.430682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.430711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.431056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.431086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.431455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.431486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.431857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.431887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.432257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.432287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 
00:34:56.555 [2024-11-25 14:33:01.432639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.432668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.433050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.433079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.433418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.433448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.433815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.433845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.434124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.434155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.434544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.434574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.434936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.434966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.435343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.435374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.435758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.435788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.436032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.436064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 
00:34:56.555 [2024-11-25 14:33:01.436436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.436467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.436823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.436853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.437227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.437257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.437630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.437659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.438031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.438061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.438404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.438435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.438795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.438826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.439184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.439214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.439598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.439627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 00:34:56.555 [2024-11-25 14:33:01.439995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.555 [2024-11-25 14:33:01.440026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.555 qpair failed and we were unable to recover it. 
00:34:56.555 [2024-11-25 14:33:01.440471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.440503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.440841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.440872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.441234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.441266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.441641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.441671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.442032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.442061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.442521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.442552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.442895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.442924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.443196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.443226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.443600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.443631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.443991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.444027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 
00:34:56.556 [2024-11-25 14:33:01.444278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.444310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.444667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.444696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.445007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.445037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.445394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.445424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.445780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.445810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.446173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.446206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.446453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.446484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.446732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.446761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.447200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.447231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.447586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.447616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 
00:34:56.556 [2024-11-25 14:33:01.447986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.448015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.448303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.448334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.448681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.448710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.449082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.449112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.449454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.449485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.449838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.449867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.556 [2024-11-25 14:33:01.450250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.556 [2024-11-25 14:33:01.450281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.556 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.450661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.450691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.450929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.450962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.451348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.451378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 
00:34:56.557 [2024-11-25 14:33:01.451737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.451766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.452013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.452046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.452481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.452512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.452854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.452883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.453200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.453230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.453605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.453634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.453996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.454027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.454369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.454400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.454661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.454690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.455045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.455074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 
00:34:56.557 [2024-11-25 14:33:01.455332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.455363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.455718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.455747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.456118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.456148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.456494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.456524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.456764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.456793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.457148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.457187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.457545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.457576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.457823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.457856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.458200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.458231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.458584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.458626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 
00:34:56.557 [2024-11-25 14:33:01.458966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.458997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.459350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.459380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.459776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.459806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.460156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.460194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.460573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.460603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.557 qpair failed and we were unable to recover it. 00:34:56.557 [2024-11-25 14:33:01.460973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.557 [2024-11-25 14:33:01.461003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.461350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.461386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.461750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.461780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.462128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.462157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.462511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.462540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 
00:34:56.558 [2024-11-25 14:33:01.462938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.462968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.463313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.463344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.463711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.463742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.464110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.464141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.464572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.464603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.464939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.464969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.465332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.465363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.465749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.465778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.466139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.466190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.466570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.466600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 
00:34:56.558 [2024-11-25 14:33:01.466960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.466990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.467417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.467447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.467803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.467834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.468200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.468231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.468631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.468659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.469015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.469044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.469434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.469465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.469820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.469850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.470226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.470256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 00:34:56.558 [2024-11-25 14:33:01.470476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.558 [2024-11-25 14:33:01.470508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.558 qpair failed and we were unable to recover it. 
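On Linux, errno 111 is ECONNREFUSED: the TCP SYN to 10.0.0.2:4420 is answered with a RST because no process is listening on that port, so each reconnect attempt by the host-side nvme_tcp driver fails immediately and the driver simply tries again. A minimal shell probe of the same condition (a hypothetical standalone snippet, not part of the test scripts; assumes an nc that supports -z and -w):

    # Retry a plain TCP connect to the target address used in this run; with
    # no listener on the port, every attempt is refused, which is exactly the
    # errno = 111 the driver keeps logging above.
    while ! nc -z -w 1 10.0.0.2 4420; do
        echo "connect() to 10.0.0.2:4420 refused; retrying"
        sleep 0.2
    done
    echo "target is accepting connections again"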
00:34:56.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3624409 Killed "${NVMF_APP[@]}" "$@"
00:34:56.560 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
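This is the fault the test injects: tc2 kills the running nvmf_tgt (pid 3624409), which is why every connect() above is refused, and disconnect_init then brings up a fresh target at 10.0.0.2. A self-contained miniature of the same kill-and-observe pattern (hypothetical port 14420, a plain nc listener standing in for nvmf_tgt; assumes an OpenBSD-style nc with -k and -z):

    nc -lk 127.0.0.1 14420 &    # stand-in "target" listener (-k: survive client disconnects)
    listener=$!
    sleep 0.2
    nc -z -w 1 127.0.0.1 14420 && echo "connect accepted while target is up"
    kill -9 "$listener"         # the injected fault: SIGKILL, as at line 36 above
    sleep 0.2
    nc -z -w 1 127.0.0.1 14420 || echo "connect refused after target killed (errno 111)"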
00:34:56.560 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:56.560 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:56.560 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:56.561 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:56.561 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3625342
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3625342
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3625342 ']'
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:56.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:56.562 14:33:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
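waitforlisten then polls until the restarted target (pid 3625342) is actually up. A simplified stand-in for the loop implied by the trace above (the real helper in autotest_common.sh differs; pid, rpc_addr, and max_retries are the values traced here):

    # Poll until the new nvmf_tgt creates its RPC socket, giving up after
    # max_retries attempts or if the process dies before it ever listens.
    pid=3625342 rpc_addr=/var/tmp/spdk.sock max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo "process $pid exited early"; break; }
        [[ -S $rpc_addr ]] && { echo "process $pid is listening on $rpc_addr"; break; }
        sleep 0.1
    done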
00:34:56.562 [... reconnect failures (connect() errno 111 to 10.0.0.2:4420) continue through 14:33:01.545 while the new target starts up; duplicate records elided ...]
00:34:56.566 [2024-11-25 14:33:01.545397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.545429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.545842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.545871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.546238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.546269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.546604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.546634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.546888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.546920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.547320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.547351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.547686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.547716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.547896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.547926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.548293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.548324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.548562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.548591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 
00:34:56.566 [2024-11-25 14:33:01.548943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.548972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.549231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.549268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.549636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.549666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.550064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.550095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.550452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.550482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.550870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.550899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.551143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.551199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.551550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.551579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.551834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.551863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 00:34:56.566 [2024-11-25 14:33:01.552204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.566 [2024-11-25 14:33:01.552235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.566 qpair failed and we were unable to recover it. 
00:34:56.567 [2024-11-25 14:33:01.552506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.552535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.552806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.552835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.552984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.553017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.553424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.553457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.553808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.553837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.554068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.554098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.554471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.554501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.554880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.554909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.555068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.555097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.555465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.555496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 
00:34:56.567 [2024-11-25 14:33:01.555846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.555875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.556221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.556252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.556606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.556635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.557002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.557031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.557281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.557315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.557681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.557710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.558082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.558112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.558500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.558530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.558893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.558923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 00:34:56.567 [2024-11-25 14:33:01.559284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.567 [2024-11-25 14:33:01.559314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.567 qpair failed and we were unable to recover it. 
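errno = 111 is ECONNREFUSED on Linux: nothing at 10.0.0.2 is accepting TCP connections on port 4420 (the NVMe/TCP well-known port), so each qpair reconnect attempt fails at the socket layer before any NVMe-level handshake can start. The following minimal, self-contained C sketch (not SPDK code; the address and port are copied from the log purely for illustration) reproduces the same errno when no listener is present:

/* sketch: reproduce "connect() failed, errno = 111" against a refusing port */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* with no listener on the port, Linux sets errno to 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against a host that refuses connections on port 4420 and it prints "connect() failed, errno = 111 (Connection refused)", matching the pattern above.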
00:34:56.567 [2024-11-25 14:33:01.559911] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:34:56.567 [2024-11-25 14:33:01.559981] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence continues around these initialization messages, from 14:33:01.559686 through 14:33:01.562778 ...]
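These two lines show an SPDK nvmf application process starting up: SPDK at commit 9d382c252 initializing DPDK 24.03.0's Environment Abstraction Layer with core mask 0xF0 (cores 4-7) and hugepage file prefix spdk0. As a hedged sketch of what that parameter line corresponds to (SPDK assembles and passes this argv internally; the vector below simply mirrors the logged values and is not taken from SPDK source):

/* sketch: initializing DPDK's EAL with the parameters seen in the log */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* program name as logged */
        "-c", "0xF0",                     /* core mask: cores 4-7 */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--log-level=lib.cryptodev:5",
        "--log-level=lib.power:5",
        "--log-level=user1:6",
        "--base-virtaddr=0x200000000000", /* fixed base for memory mappings */
        "--match-allocations",
        "--file-prefix=spdk0",            /* isolates hugepage files per process */
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    return 0;
}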
[... the same qpair reconnect failures (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") continue uninterrupted from 14:33:01.563119 through 14:33:01.607314, differing only in timestamps ...]
00:34:56.572 [2024-11-25 14:33:01.607667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.607698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.608082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.608112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.608481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.608513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.608912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.608941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.609205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.609238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.609610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.609639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.610038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.610068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.610395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.610427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.610712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.610742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.611113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.611142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 
00:34:56.572 [2024-11-25 14:33:01.611463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.611495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.611763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.611793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.612133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.612172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.612546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.612576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.612835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.612871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.613199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.613229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.613584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.613616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.613844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.613879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.614245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.614276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.614666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.614696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 
00:34:56.572 [2024-11-25 14:33:01.615055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.572 [2024-11-25 14:33:01.615085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.572 qpair failed and we were unable to recover it. 00:34:56.572 [2024-11-25 14:33:01.615344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.615376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.615665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.615694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.616051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.616083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.616445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.616475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.616834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.616863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.617246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.617278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.617507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.617544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.617820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.617850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.618242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.618274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 
00:34:56.573 [2024-11-25 14:33:01.618702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.618732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.619094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.619125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.619427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.619459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.619830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.619860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.620216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.620248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.620626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.620657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.620989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.621022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.621245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.573 [2024-11-25 14:33:01.621277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.573 qpair failed and we were unable to recover it. 00:34:56.573 [2024-11-25 14:33:01.621629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.621658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.621915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.621947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 
00:34:56.848 [2024-11-25 14:33:01.622297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.622328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.622578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.622608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.622959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.622990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.623329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.623361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.623721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.623754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.624133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.624174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.624553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.624584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.624965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.624994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.625391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.625421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.625784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.625814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 
00:34:56.848 [2024-11-25 14:33:01.626196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.626227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.626578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.626609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.626875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.848 [2024-11-25 14:33:01.626904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.848 qpair failed and we were unable to recover it. 00:34:56.848 [2024-11-25 14:33:01.627325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.627356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.627596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.627632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.627973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.628004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.628245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.628279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.628647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.628676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.628908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.628937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.629375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.629406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 
00:34:56.849 [2024-11-25 14:33:01.629645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.629677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.630051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.630081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.630338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.630370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.630749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.630780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.631145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.631185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.631444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.631475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.631846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.631876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.632106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.632136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.632580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.632610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.632843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.632873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 
00:34:56.849 [2024-11-25 14:33:01.633289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.633320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.633570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.633600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.633959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.633989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.634392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.634424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.634768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.634798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.635184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.635216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.635662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.635692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.636056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.636085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.636484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.636515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.636971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.637002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 
00:34:56.849 [2024-11-25 14:33:01.637250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.637281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.849 [2024-11-25 14:33:01.637625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.849 [2024-11-25 14:33:01.637656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.849 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.638021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.638051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.638297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.638330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.638663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.638693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.638999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.639028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.639423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.639455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.639813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.639843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.640208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.640240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.640612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.640642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 
00:34:56.850 [2024-11-25 14:33:01.640981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.641010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.641407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.641438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.641798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.641829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.642047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.642076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.642424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.642462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.642724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.642754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.643126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.643156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.643584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.643615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.643988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.644020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.644394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.644425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 
00:34:56.850 [2024-11-25 14:33:01.644682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.644715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.645079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.645109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.645486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.645518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.645858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.645888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.646281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.646313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.646733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.646763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.647106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.647135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.647529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.647560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.647804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.647834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.648084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.648113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 
00:34:56.850 [2024-11-25 14:33:01.648528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.648560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.648922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.850 [2024-11-25 14:33:01.648953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.850 qpair failed and we were unable to recover it. 00:34:56.850 [2024-11-25 14:33:01.649292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.649324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.649675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.649705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.650051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.650080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.650303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.650334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.650700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.650730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.650963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.650992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.651355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.651387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.651745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.651776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 
00:34:56.851 [2024-11-25 14:33:01.652139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.652179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.652592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.652623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.652851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.652883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.653254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.653285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.653649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.653679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.654029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.654058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.654463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.654494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.654859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.654889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.655118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.655148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 00:34:56.851 [2024-11-25 14:33:01.655540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.851 [2024-11-25 14:33:01.655571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:56.851 qpair failed and we were unable to recover it. 
00:34:56.851 [2024-11-25 14:33:01.655943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.851 [2024-11-25 14:33:01.655973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:56.851 qpair failed and we were unable to recover it.
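errno = 111 is ECONNREFUSED on Linux: each connect() to the target at 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) is refused because nothing is accepting on that address/port yet, and nvme_tcp_qpair_connect_sock surfaces that as a qpair connection error and retries, producing the triplet above over and over throughout the elided stretches below. A minimal standalone sketch that reproduces the same errno; the 127.0.0.1 address is an assumption chosen so that no listener is present:

/* Not part of the test output: a minimal sketch reproducing the errno above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* On Linux, errno 111 is ECONNREFUSED. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);          /* IANA port for NVMe/TCP */
    /* Assumption for illustration: no listener on this address/port. */
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

With no listener on the chosen port this prints "connect() failed, errno = 111 (Connection refused)", matching the repeated posix_sock_create error in this log.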
...
00:34:56.852 [2024-11-25 14:33:01.668005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
...
00:34:56.857 [2024-11-25 14:33:01.722477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.857 [2024-11-25 14:33:01.722508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 [2024-11-25 14:33:01.722502] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:56.857 qpair failed and we were unable to recover it.
00:34:56.857 [2024-11-25 14:33:01.722552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:56.857 [2024-11-25 14:33:01.722561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:56.857 [2024-11-25 14:33:01.722568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:56.857 [2024-11-25 14:33:01.722575] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
...
00:34:56.858 [2024-11-25 14:33:01.724695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.858 [2024-11-25 14:33:01.724577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:34:56.858 [2024-11-25 14:33:01.724728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:56.858 qpair failed and we were unable to recover it.
00:34:56.858 [2024-11-25 14:33:01.724622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:34:56.858 [2024-11-25 14:33:01.724820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:34:56.858 [2024-11-25 14:33:01.724820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:34:56.858 [the connect() failed / sock connection error / qpair failed sequence for tqpair=0x7f70ac000b90 repeats from 14:33:01.725078 through 14:33:01.727602]
00:34:56.858 [the connect() failed, errno = 111 / sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. sequence repeats continuously from 14:33:01.727998 through 14:33:01.762710]
00:34:56.862 [final retries on tqpair=0x7f70ac000b90 fail the same way from 14:33:01.763101 through 14:33:01.763760]
00:34:56.862 [2024-11-25 14:33:01.764428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.862 [2024-11-25 14:33:01.764568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420
00:34:56.862 qpair failed and we were unable to recover it.
00:34:56.862 [the same sequence then repeats for tqpair=0x7f70a0000b90 from 14:33:01.764875 through 14:33:01.785560]
00:34:56.864 [2024-11-25 14:33:01.785919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.785949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.786317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.786349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.786628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.786659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.787012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.787042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.787401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.787434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.787529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.787557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.788067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.788208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.788653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.788692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.788913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.788945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.789437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.789541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 
00:34:56.864 [2024-11-25 14:33:01.790000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.790038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.790442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.790477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.790778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.790809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.791100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.791132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.791405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.791436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.791702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.791735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.791978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.792007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.792471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.792502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.792845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.792876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.793244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.793276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 
00:34:56.864 [2024-11-25 14:33:01.793628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.793658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.794015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.794046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.794461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.794493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.794855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.794884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.795254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.795285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.795646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.864 [2024-11-25 14:33:01.795676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.864 qpair failed and we were unable to recover it. 00:34:56.864 [2024-11-25 14:33:01.796053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.796083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.796508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.796539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.796910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.796941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.797094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.797123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 
00:34:56.865 [2024-11-25 14:33:01.797604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.797636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.797978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.798010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.798269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.798301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.798566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.798596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.798972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.799002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.799210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.799243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.799569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.799599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.799819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.799849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.799994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.800025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.800430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.800461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 
00:34:56.865 [2024-11-25 14:33:01.800801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.800830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.801066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.801097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.801434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.801465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.801725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.801757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.802025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.802055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.802419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.802451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.802836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.802873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.803273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.803305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.803681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.803711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.804089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.804119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 
00:34:56.865 [2024-11-25 14:33:01.804393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.804424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.804797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.804826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.805198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.805230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.805464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.865 [2024-11-25 14:33:01.805494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.865 qpair failed and we were unable to recover it. 00:34:56.865 [2024-11-25 14:33:01.805722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.805751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.806129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.806171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.806544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.806574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.806745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.806775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.807182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.807213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.807596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.807627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 
00:34:56.866 [2024-11-25 14:33:01.807995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.808026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.808258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.808289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.808507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.808537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.808899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.808929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.809270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.809301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.809670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.809700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.810081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.810111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.810464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.810495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.810705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.810737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.811096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.811126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 
00:34:56.866 [2024-11-25 14:33:01.811467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.811499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.811854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.811885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.812324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.812355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.812741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.812771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.813146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.813188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.813466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.813497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.813850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.813881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.866 qpair failed and we were unable to recover it. 00:34:56.866 [2024-11-25 14:33:01.814108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.866 [2024-11-25 14:33:01.814139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.814546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.814577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.814942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.814971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 
00:34:56.867 [2024-11-25 14:33:01.815214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.815247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.815666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.815696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.816032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.816062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.816459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.816489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.816859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.816889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.817264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.817296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.817630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.817680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.818033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.818064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.818299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.818331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.818701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.818731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 
00:34:56.867 [2024-11-25 14:33:01.819121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.819151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.819509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.819539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.819742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.819773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.820117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.820149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.820566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.820597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.820947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.820978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.821315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.821347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.821554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.821585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.821960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.821992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.822340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.822372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 
00:34:56.867 [2024-11-25 14:33:01.822753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.822783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.823140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.823179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.823540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.823570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.823943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.823973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.824235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.824268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.824617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.824647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.825067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.825098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.825475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.825507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.825865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.867 [2024-11-25 14:33:01.825895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.867 qpair failed and we were unable to recover it. 00:34:56.867 [2024-11-25 14:33:01.826243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.826275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 
00:34:56.868 [2024-11-25 14:33:01.826599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.826628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.826730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.826760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.827131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.827169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.827547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.827578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.827913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.827943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.828294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.828324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.828670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.828701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.829068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.829100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.829480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.829512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.829841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.829872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 
00:34:56.868 [2024-11-25 14:33:01.830277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.830309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.830687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.830717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.831094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.831124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.831384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.831416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.831768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.831799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.832038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.832069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.832424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.832462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.832682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.832713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.833081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.833111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.833518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.833549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 
00:34:56.868 [2024-11-25 14:33:01.833912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.833942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.834191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.834224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.834610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.834639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.835008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.835037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.835395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.835425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.835650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.835679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.835893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.835923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.836250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.836283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.836662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.836693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 00:34:56.868 [2024-11-25 14:33:01.836936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.868 [2024-11-25 14:33:01.836966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.868 qpair failed and we were unable to recover it. 
00:34:56.869 [2024-11-25 14:33:01.837365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.837397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.837742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.837771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.838034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.838064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.838174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.838204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.838576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.838607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.838878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.838911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.839136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.839179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.839509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.839538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.839909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.839938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 00:34:56.869 [2024-11-25 14:33:01.840226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.869 [2024-11-25 14:33:01.840258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.869 qpair failed and we were unable to recover it. 
00:34:56.869 [2024-11-25 14:33:01.840693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.869 [2024-11-25 14:33:01.840722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420
00:34:56.869 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 14:33:01.840 and 14:33:01.918, every attempt targeting addr=10.0.0.2, port=4420; the failing tqpair is 0x7f70a4000b90 throughout, except for three attempts around 14:33:01.913-01.914 against tqpair=0x7f70ac000b90 ...]
00:34:56.876 [2024-11-25 14:33:01.917965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:56.876 [2024-11-25 14:33:01.917994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420
00:34:56.876 qpair failed and we were unable to recover it.
00:34:56.876 [2024-11-25 14:33:01.918376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.918406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.918750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.918780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.919148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.919189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.919528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.919558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.919926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.919956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.920304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.920334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.920584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.920624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.920999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.921030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.921447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.921480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.876 [2024-11-25 14:33:01.921834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.921863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 
00:34:56.876 [2024-11-25 14:33:01.922198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.876 [2024-11-25 14:33:01.922229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.876 qpair failed and we were unable to recover it. 00:34:56.877 [2024-11-25 14:33:01.922543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.877 [2024-11-25 14:33:01.922572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.877 qpair failed and we were unable to recover it. 00:34:56.877 [2024-11-25 14:33:01.922932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.877 [2024-11-25 14:33:01.922961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.877 qpair failed and we were unable to recover it. 00:34:56.877 [2024-11-25 14:33:01.923179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.877 [2024-11-25 14:33:01.923209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.877 qpair failed and we were unable to recover it. 00:34:56.877 [2024-11-25 14:33:01.923534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:56.877 [2024-11-25 14:33:01.923563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:56.877 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.923959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.923991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.924281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.924316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.924660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.924690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.924916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.924947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.925194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.925225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 
00:34:57.151 [2024-11-25 14:33:01.925634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.925664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.926037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.926066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.926414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.926447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.926819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.926851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.927226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.927257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.927622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.151 [2024-11-25 14:33:01.927651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.151 qpair failed and we were unable to recover it. 00:34:57.151 [2024-11-25 14:33:01.928018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.928049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.928407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.928438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.928802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.928831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.929200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.929232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 
00:34:57.152 [2024-11-25 14:33:01.929609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.929639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.929996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.930025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.930376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.930407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.930766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.930795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.931180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.931211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.931425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.931454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.931771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.931801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.932185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.932218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.932552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.932581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.932967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.932997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 
00:34:57.152 [2024-11-25 14:33:01.933344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.933374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.933754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.933783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.934175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.934206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.934586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.934618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.934991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.935021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.935404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.935435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.935779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.935808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.936180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.936211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.936548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.936578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.936847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.936878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 
00:34:57.152 [2024-11-25 14:33:01.937212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.937244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.937522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.937553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.937898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.937928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.938291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.152 [2024-11-25 14:33:01.938322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.152 qpair failed and we were unable to recover it. 00:34:57.152 [2024-11-25 14:33:01.938675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.938704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.939080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.939110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.939487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.939518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.939769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.939798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.940056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.940086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.940460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.940492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 
00:34:57.153 [2024-11-25 14:33:01.940897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.940927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.941136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.941176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.941507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.941538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.941752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.941781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.942142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.942195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.942558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.942588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.942929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.942959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.943190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.943225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.943585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.943615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.943973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.944001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 
00:34:57.153 [2024-11-25 14:33:01.944376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.944406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.944765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.944794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.945172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.945203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.945434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.945470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.945710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.945740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.946127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.946157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.946509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.946539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.946902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.946931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.947149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.947189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.947436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.947465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 
00:34:57.153 [2024-11-25 14:33:01.947701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.947732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.948101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.948132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.948449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.948479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.153 qpair failed and we were unable to recover it. 00:34:57.153 [2024-11-25 14:33:01.948844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.153 [2024-11-25 14:33:01.948874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.949246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.949278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.949634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.949664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.950030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.950061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.950442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.950474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.950852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.950881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.951237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.951268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 
00:34:57.154 [2024-11-25 14:33:01.951606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.951636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.951996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.952026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.952299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.952331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.952679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.952708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.953081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.953110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.953331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.953361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.953617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.953646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.954022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.954052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.954396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.954427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.954774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.954803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 
00:34:57.154 [2024-11-25 14:33:01.955024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.955056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.955265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.955297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.955675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.955705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.956074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.956103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.956359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.956391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.956752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.956781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.957151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.957190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.957563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.957594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.957959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.957989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.958331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.958363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 
00:34:57.154 [2024-11-25 14:33:01.958465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.958494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.958716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.958747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.959119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.154 [2024-11-25 14:33:01.959148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.154 qpair failed and we were unable to recover it. 00:34:57.154 [2024-11-25 14:33:01.959514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.959553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.959879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.959909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.960122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.960153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.960529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.960561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.960909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.960941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.961290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.961321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.961683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.961714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 
00:34:57.155 [2024-11-25 14:33:01.962080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.962109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.962454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.962485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.962824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.962854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.963226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.963258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.963610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.963640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.964022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.964053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.964426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.964459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.964823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.964853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.965202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.965233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 00:34:57.155 [2024-11-25 14:33:01.965462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.155 [2024-11-25 14:33:01.965491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420 00:34:57.155 qpair failed and we were unable to recover it. 
00:34:57.155 [2024-11-25 14:33:01.965830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:57.155 [2024-11-25 14:33:01.965860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a4000b90 with addr=10.0.0.2, port=4420
00:34:57.155 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x7f70a4000b90, roughly 75 attempts between 14:33:01.965 and 14:33:01.992 ...]
00:34:57.158 [2024-11-25 14:33:01.992475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:57.158 [2024-11-25 14:33:01.992585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420
00:34:57.158 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7f70a0000b90, roughly 65 attempts between 14:33:01.992 and 14:33:02.017 ...]
00:34:57.160 [2024-11-25 14:33:02.017968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:57.160 [2024-11-25 14:33:02.018089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:57.160 qpair failed and we were unable to recover it.
[... repeats for tqpair=0x7f70ac000b90, roughly 65 attempts between 14:33:02.017 and 14:33:02.041 ...]
00:34:57.162 [2024-11-25 14:33:02.042192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.042224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.042430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.042460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.042805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.042836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.043062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.043090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.043446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.043477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.043853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.043882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.044243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.044273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.044633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.162 [2024-11-25 14:33:02.044662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.162 qpair failed and we were unable to recover it. 00:34:57.162 [2024-11-25 14:33:02.044928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.044958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.045077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.045107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 
00:34:57.163 [2024-11-25 14:33:02.045373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.045404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.045752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.045781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.046028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.046058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.046340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.046379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.046746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.046775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.047146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.047183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.047493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.047522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.047773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.047803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.048154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.048191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.048339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.048368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 
00:34:57.163 [2024-11-25 14:33:02.048732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.048761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.049000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.049030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.049146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.049196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.049599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.049628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.049853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.049882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.050229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.050261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.050653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.050682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.051051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.051081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.051430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.051461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.051825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.051857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 
00:34:57.163 [2024-11-25 14:33:02.052206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.052236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.052451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.052480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.052872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.052901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.053118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.053146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.163 [2024-11-25 14:33:02.053502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.163 [2024-11-25 14:33:02.053531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.163 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.053748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.053779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.054153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.054193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.054399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.054428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.054785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.054815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.055086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.055118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 
00:34:57.164 [2024-11-25 14:33:02.055349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.055380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.055758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.055787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.056173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.056203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.056529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.056559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.056978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.057008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.057243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.057274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.057632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.057661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.058043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.058072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.058435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.058466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.058721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.058751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 
00:34:57.164 [2024-11-25 14:33:02.058980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.059010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.059297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.059327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.059713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.059742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.060112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.060148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.060477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.060506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.060860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.060888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.061270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.061303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.061548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.061578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.061953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.061982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.062390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.062421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 
00:34:57.164 [2024-11-25 14:33:02.062774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.062803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.063011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.063041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.063413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.063444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.063782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.063811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.064045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.164 [2024-11-25 14:33:02.064073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.164 qpair failed and we were unable to recover it. 00:34:57.164 [2024-11-25 14:33:02.064424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.064455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.064842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.064871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.065106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.065135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.065531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.065562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.065824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.065854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 
00:34:57.165 [2024-11-25 14:33:02.066100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.066133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.066391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.066422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.066669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.066699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.066963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.066996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.067349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.067379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.067588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.067619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.067829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.067857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.068109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.068143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.068510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.068540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.068753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.068781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 
00:34:57.165 [2024-11-25 14:33:02.069038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.069069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.069332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.069363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.069727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.069756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.070130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.070169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.070527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.070557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.070897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.070926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.071270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.071302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.071666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.071695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.072085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.072113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.072473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.072504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 
00:34:57.165 [2024-11-25 14:33:02.072867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.165 [2024-11-25 14:33:02.072897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.165 qpair failed and we were unable to recover it. 00:34:57.165 [2024-11-25 14:33:02.073285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.073316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.073689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.073719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.073980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.074019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.074255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.074287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.074495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.074523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.074788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.074819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.075046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.075076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.075435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.075466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.075855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.075885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 
00:34:57.166 [2024-11-25 14:33:02.076257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.076288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.076542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.076571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.076938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.076967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.077248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.077279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.077649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.077678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.078036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.078065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.078415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.078446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.078668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.078698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.078952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.078980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.079366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.079398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 
00:34:57.166 [2024-11-25 14:33:02.079752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.079781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.080112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.080140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.080519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.080549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.080879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.080907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.081239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.081270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.081656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.081685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.082051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.082080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.082458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.082489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.082870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.166 [2024-11-25 14:33:02.082899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.166 qpair failed and we were unable to recover it. 00:34:57.166 [2024-11-25 14:33:02.083106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.083135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 
00:34:57.167 [2024-11-25 14:33:02.083538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.083570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.083795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.083829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.084226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.084259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.084604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.084633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.085010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.085040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.085381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.085412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.085775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.085808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.086026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.086057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.086426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.086458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 00:34:57.167 [2024-11-25 14:33:02.086827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.086857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it. 
00:34:57.167 [2024-11-25 14:33:02.087216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.167 [2024-11-25 14:33:02.087246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.167 qpair failed and we were unable to recover it.
00:34:57.167 [... preceding triplet (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error / qpair failed and we were unable to recover it.) repeated for every reconnect attempt between 14:33:02.087 and 14:33:02.163, against tqpair handles 0x7f70ac000b90, 0x7f70a0000b90, and 0xae20c0, all targeting addr=10.0.0.2, port=4420 ...]
00:34:57.171 [2024-11-25 14:33:02.124198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad7e00 is same with the state(6) to be set
00:34:57.175 [2024-11-25 14:33:02.163705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.163738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it.
00:34:57.175 [2024-11-25 14:33:02.164087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.164116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.164486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.164517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.164878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.164909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.165171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.165201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.165549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.165578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.165791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.165822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.166197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.166229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.166464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.166493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.166736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.166767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.167100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.167130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 
00:34:57.175 [2024-11-25 14:33:02.167509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.167541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.167905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.167935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.168216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.168248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.168604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.168634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.168997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.169026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.169396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.169426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.169800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.169829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.170199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.170228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.170576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.170606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.170999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.171029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 
00:34:57.175 [2024-11-25 14:33:02.171363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.171393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.175 [2024-11-25 14:33:02.171675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.175 [2024-11-25 14:33:02.171708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.175 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.171917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.171947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.172175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.172205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.172542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.172570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.172841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.172871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.173242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.173272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.173674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.173702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.173924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.173953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.174296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.174327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 
00:34:57.176 [2024-11-25 14:33:02.174543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.174572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.174930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.174959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.175182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.175216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.175463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.175492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.175874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.175910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.176128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.176165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.176522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.176550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.176919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.176948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.177301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.177331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.177692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.177721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 
00:34:57.176 [2024-11-25 14:33:02.177982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.178011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.178391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.178422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.178789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.178819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.179037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.179066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.179404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.179434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.179674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.179706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.180072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.180101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.180475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.180505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.180854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.180884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.181114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.181144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 
00:34:57.176 [2024-11-25 14:33:02.181388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.176 [2024-11-25 14:33:02.181420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.176 qpair failed and we were unable to recover it. 00:34:57.176 [2024-11-25 14:33:02.181701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.181730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.182081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.182111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.182460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.182490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.182844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.182873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.183246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.183276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.183597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.183628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.183721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.183748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.184122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.184151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.184521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.184551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 
00:34:57.177 [2024-11-25 14:33:02.184814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.184843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.185089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.185118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.185373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.185408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.185656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.185685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.186028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.186058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.186324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.186356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.186624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.186654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.187049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.187079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.187291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.187322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.187665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.187695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 
00:34:57.177 [2024-11-25 14:33:02.188046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.188077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.188326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.188356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.188735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.188765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.188999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.189028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.189281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.189317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.189537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.189565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.189680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.189712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.190130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.190167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.190506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.190535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 00:34:57.177 [2024-11-25 14:33:02.190795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.177 [2024-11-25 14:33:02.190825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.177 qpair failed and we were unable to recover it. 
00:34:57.177 [2024-11-25 14:33:02.191184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.191216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.191450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.191479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.191717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.191745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.192180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.192211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.192476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.192505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.192898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.192927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.193178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.193208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.193605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.193636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.193970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.193999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.194367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.194398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 
00:34:57.178 [2024-11-25 14:33:02.194669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.194698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.195052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.195081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.195307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.195338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.195708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.195737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.196095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.196125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.196514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.196543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.196929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.196959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.197312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.197342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.197722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.197751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.197976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.198005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 
00:34:57.178 [2024-11-25 14:33:02.198301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.198331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.198552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.198581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.198944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.198973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.199333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.199363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.199714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.199742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.200103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.200132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.200485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.200517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.200870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.200900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.201278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.201309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 00:34:57.178 [2024-11-25 14:33:02.201685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.178 [2024-11-25 14:33:02.201714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.178 qpair failed and we were unable to recover it. 
00:34:57.179 [2024-11-25 14:33:02.202084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.202113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.202493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.202524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.202882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.202912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.203278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.203309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.203693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.203728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.204115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.204146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.204497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.204528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.204746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.204777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.205155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.205193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.205512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.205541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 
00:34:57.179 [2024-11-25 14:33:02.205764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.205794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.206175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.206205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.206438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.206468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.206784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.206813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.207173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.207203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.207526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.207555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.207934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.207963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.208227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.208261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.208606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.208636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.209006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.209035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 
00:34:57.179 [2024-11-25 14:33:02.209390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.209420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.209784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.209813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.210178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.210208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.210580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.210609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.210818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.210847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.211214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.211244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.211475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.211505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.211916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.211946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.212304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.179 [2024-11-25 14:33:02.212334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.179 qpair failed and we were unable to recover it. 00:34:57.179 [2024-11-25 14:33:02.212678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.212708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 
00:34:57.180 [2024-11-25 14:33:02.212920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.212950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.213285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.213316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.213683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.213715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.213959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.213989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.214339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.214370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.214741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.214770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.215141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.215184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.215568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.215599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.215871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.215900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.216142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.216179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 
00:34:57.180 [2024-11-25 14:33:02.216556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.216585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.216965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.216995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.217328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.217359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.217582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.217611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.217961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.217996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.218233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.218263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.218549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.218578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.218960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.218989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.219368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.219400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.219612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.219642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 
00:34:57.180 [2024-11-25 14:33:02.219981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.220011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.220382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.220412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.220622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.220651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.221024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.221053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.221448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.221478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.221850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.221879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.222103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.222137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.222548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.222578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.222944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.180 [2024-11-25 14:33:02.222973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.180 qpair failed and we were unable to recover it. 00:34:57.180 [2024-11-25 14:33:02.223240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.223271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 
00:34:57.181 [2024-11-25 14:33:02.223664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.223693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.181 [2024-11-25 14:33:02.224062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.224091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.181 [2024-11-25 14:33:02.224285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.224318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.181 [2024-11-25 14:33:02.224708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.224738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.181 [2024-11-25 14:33:02.225110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.225139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.181 [2024-11-25 14:33:02.225537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.225567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.181 [2024-11-25 14:33:02.225797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.225829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.181 [2024-11-25 14:33:02.226221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.181 [2024-11-25 14:33:02.226252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.181 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.226634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.226667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.226969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.227001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 
00:34:57.456 [2024-11-25 14:33:02.227336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.227367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.227731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.227762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.228121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.228150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.228532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.228563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.228942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.228972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.229235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.229266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.229493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.229523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.229845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.229875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.230244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.230275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.230658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.230688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 
00:34:57.456 [2024-11-25 14:33:02.230914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.230944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.231199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.231231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.231576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.231605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.231817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.231846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.232194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.232230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.232595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.232624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.232994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.233023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.233385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.233416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.233657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.233687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.234041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.234072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 
00:34:57.456 [2024-11-25 14:33:02.234448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.234480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.234737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.234766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.235114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.235143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.235503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.235533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.235875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.235904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.236125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.236154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.236545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.236575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.236939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.236969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.237298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.237329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.237684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.237714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 
00:34:57.456 [2024-11-25 14:33:02.238073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.238103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.238335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.238366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.238707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.238737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.456 [2024-11-25 14:33:02.239098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.456 [2024-11-25 14:33:02.239128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.456 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.239469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.239499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.239856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.239887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.240240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.240271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.240638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.240667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.241025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.241055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.241403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.241434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 
00:34:57.457 [2024-11-25 14:33:02.241799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.241828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.242113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.242142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.242371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.242402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.242758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.242787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.243198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.243228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.243607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.243636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.243841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.243871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.244238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.244268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.244640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.244669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.244900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.244933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 
00:34:57.457 [2024-11-25 14:33:02.245296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.245328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.245562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.245592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.245889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.245919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.246280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.246314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.246673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.246709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.246921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.246951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.247319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.247349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.247707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.247736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.248106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.248134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.248506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.248536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 
00:34:57.457 [2024-11-25 14:33:02.248899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.248929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.249170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.249203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.249557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.249586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.249686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.249714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.250090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.250118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.250487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.250519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.250714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.250743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.250978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.251008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.251342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.251373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.251760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.251788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 
00:34:57.457 [2024-11-25 14:33:02.252154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.252194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.252553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.457 [2024-11-25 14:33:02.252582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.457 qpair failed and we were unable to recover it. 00:34:57.457 [2024-11-25 14:33:02.252805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.252837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.253204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.253235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.253459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.253489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.253857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.253886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.254133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.254170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.254512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.254542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.254900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.254930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.255302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.255332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 
00:34:57.458 [2024-11-25 14:33:02.255721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.255751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.256112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.256142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.256502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.256532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.256906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.256936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.257312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.257342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.257556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.257587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.257955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.257984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.258361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.258392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.258770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.258800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.258953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.258984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 
00:34:57.458 [2024-11-25 14:33:02.259341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.259372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.259746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.259776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.260146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.260187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.260535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.260566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.260781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.260817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.261179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.261212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.261420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.261450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.261801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.261831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.262215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.262245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.262625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.262654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 
00:34:57.458 [2024-11-25 14:33:02.263030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.263059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.263459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.263489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.263863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.263891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.264265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.264296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.264634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.264665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.264909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.264938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.265301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.265332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.265706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.265739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.266094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.266126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 00:34:57.458 [2024-11-25 14:33:02.266524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.458 [2024-11-25 14:33:02.266557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.458 qpair failed and we were unable to recover it. 
00:34:57.458 [2024-11-25 14:33:02.266936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.266966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.267183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.267214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.267588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.267617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.267983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.268012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.268375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.268405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.268776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.268807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.269036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.269065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.269277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.269309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.269675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.269704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.269801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.269829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70a0000b90 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 
00:34:57.459 [2024-11-25 14:33:02.270293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.270407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.270737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.270776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.271191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.271226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.271623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.271653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.272005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.272036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.272477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.272588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.272998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.273035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.273392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.273425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.273798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.273828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 00:34:57.459 [2024-11-25 14:33:02.274191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.459 [2024-11-25 14:33:02.274224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.459 qpair failed and we were unable to recover it. 
00:34:57.464 [2024-11-25 14:33:02.343120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.343151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.343376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.343408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.343777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.343807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.344180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.344211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.344557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.344588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.344801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.344835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.345187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.345219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.345470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.345501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.345863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.345892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.346152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.346197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 
00:34:57.464 [2024-11-25 14:33:02.346580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.346612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.346987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.347017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.347292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.347322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.347560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.347590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.347823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.347853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.348221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.464 [2024-11-25 14:33:02.348252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.464 qpair failed and we were unable to recover it. 00:34:57.464 [2024-11-25 14:33:02.348565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.348594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.348954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.348984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.349322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.349352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.349718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.349747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 
00:34:57.465 [2024-11-25 14:33:02.350094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.350123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.350540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.350571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.350939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.350969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.351304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.351335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.351717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.351746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.352112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.352141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.352519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.352549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.352780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.352808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.353227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.353258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.353566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.353595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 
00:34:57.465 [2024-11-25 14:33:02.353962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.353992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.354458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.354488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.354862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.354897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.355259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.355290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.355506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.355535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.355886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.355915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.356143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.356186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.356438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.356467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.356831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.356860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.357073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.357103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 
00:34:57.465 [2024-11-25 14:33:02.357468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.357498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.357725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.357754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.358132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.358174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.358540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.358570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.358940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.358969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.359299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.359330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.359683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.359712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.359812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.359840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.360301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.360409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 00:34:57.465 [2024-11-25 14:33:02.360848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.465 [2024-11-25 14:33:02.360884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.465 qpair failed and we were unable to recover it. 
00:34:57.466 [2024-11-25 14:33:02.361062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.361098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.361498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.361605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.362058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.362094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.362350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.362383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.362748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.362777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.363145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.363185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.363408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.363437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.363781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.363810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.364060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.364089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.364481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.364529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 
00:34:57.466 [2024-11-25 14:33:02.364871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.364900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.365176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.365207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.365599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.365628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.366010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.366040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.366385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.366416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.366509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.366538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.366867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.366897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.367119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.367149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.367549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.367580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.367976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.368007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 
00:34:57.466 [2024-11-25 14:33:02.368385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.368416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.368788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.368817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.369204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.369234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.369492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.369522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.369892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.369921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.370288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.370319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.370713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.370742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.371091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.371119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.371507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.371538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.371768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.371796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 
00:34:57.466 [2024-11-25 14:33:02.372025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.372054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.372313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.372344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.372707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.372735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.373105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.373134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.373490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.373521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.373794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.373827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.374047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.374078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.374322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.466 [2024-11-25 14:33:02.374352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.466 qpair failed and we were unable to recover it. 00:34:57.466 [2024-11-25 14:33:02.374626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.374656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.375102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.375130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 
00:34:57.467 [2024-11-25 14:33:02.375482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.375512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.375889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.375918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.376205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.376236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.376486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.376515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.376873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.376903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.377283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.377313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.377672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.377700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.377819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.377847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.378214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.378245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.378494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.378533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 
00:34:57.467 [2024-11-25 14:33:02.378792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.378823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.379179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.379211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.379578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.379607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.379831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.379861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.380213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.380243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.380598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.380627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.380858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.380887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.381097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.381126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.381489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.381519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.381830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.381859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 
00:34:57.467 [2024-11-25 14:33:02.382227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.382259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.382474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.382503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.382863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.382891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.383168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.383202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.383585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.383614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.383930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.383959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.384341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.384372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.384749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.384779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.385080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.385109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 00:34:57.467 [2024-11-25 14:33:02.385513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.467 [2024-11-25 14:33:02.385544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.467 qpair failed and we were unable to recover it. 
[... the same records (2024-11-25 14:33:02.385908 through 14:33:02.391730) continue, interleaved with the test's sh-trace output: ...]
00:34:57.467 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:57.468 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:34:57.468 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:57.468 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:57.468 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same records continue for tqpair=0x7f70ac000b90 through 2024-11-25 14:33:02.405636 ...]
00:34:57.469 [2024-11-25 14:33:02.406004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:57.469 [2024-11-25 14:33:02.406033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420
00:34:57.469 qpair failed and we were unable to recover it.
00:34:57.469 [2024-11-25 14:33:02.406375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.406412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.406742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.406772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.407137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.407177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.407425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.407456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.407678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.407708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.407962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.407990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.408426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.408458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.408679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.408713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.408945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.408973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.409333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.409363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 
00:34:57.469 [2024-11-25 14:33:02.409724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.409754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.410109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.410138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.410566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.410596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.410950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.410979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.411334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.411366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.411741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.411770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.412036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.412064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.412412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.412443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.412820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.412848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 00:34:57.469 [2024-11-25 14:33:02.413224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.469 [2024-11-25 14:33:02.413256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.469 qpair failed and we were unable to recover it. 
00:34:57.470 [2024-11-25 14:33:02.413640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.413669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.414043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.414073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.414459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.414488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.414870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.414900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.415269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.415300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.415667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.415697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.416065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.416095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.416413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.416445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.416825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.416854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.417214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.417245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 
00:34:57.470 [2024-11-25 14:33:02.417604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.417634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.417973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.418001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.418391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.418422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.418769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.418799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.419173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.419204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.419569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.419598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.419960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.419989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.420327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.420358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.420735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.420765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.421145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.421186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 
00:34:57.470 [2024-11-25 14:33:02.421550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.421586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.421949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.421977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.422346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.422377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.422591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.422621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.422975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.423005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.423389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.423420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.423629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.423661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.424027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.424058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.424304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.424334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.424701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.424729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 
00:34:57.470 [2024-11-25 14:33:02.425117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.425146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.425419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.425449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.425701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.425730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.425981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.426009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.426381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.426413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.426757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.426786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.427149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.427189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.427548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.470 [2024-11-25 14:33:02.427578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.470 qpair failed and we were unable to recover it. 00:34:57.470 [2024-11-25 14:33:02.427955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.471 [2024-11-25 14:33:02.427983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.471 qpair failed and we were unable to recover it. 00:34:57.471 [2024-11-25 14:33:02.428364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.471 [2024-11-25 14:33:02.428395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f70ac000b90 with addr=10.0.0.2, port=4420 00:34:57.471 qpair failed and we were unable to recover it. 
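errno = 111 is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 is being actively refused because nothing is accepting on that port, which is the expected state midway through a target_disconnect test. A minimal probe sketch (not output from this run; it assumes a bash shell on the initiator and iproute2's ss on the target) to confirm the listener state:

# Initiator side: bash's /dev/tcp gives a quick refused-vs-open check
timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' && echo "port 4420 open" || echo "port 4420 refused or unreachable"
# Target side: confirm whether an NVMe/TCP listener is bound on 4420
ss -ltn | grep ':4420'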
[... the connect() failed, errno = 111 / tqpair=0x7f70ac000b90 error sequence continues, 14:33:02.428 through 14:33:02.431, interleaved with the following test-script trace output ...]
00:34:57.471 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:57.471 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:57.471 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.471 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
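Two of the traced commands above carry the test's setup: the trap registers shared-memory capture and nvmftestfini as cleanup on exit, and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 asks the target for a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; run by hand, the equivalent would look roughly like this (paths illustrative, target assumed running):

# Register cleanup handlers exactly as the test does; process_shm and
# nvmftestfini are autotest helper functions sourced from the test framework
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
# Create the malloc bdev directly against the running target:
#   64 = size in MB, 512 = block size in bytes, -b = bdev name
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0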
[... the same error sequence for tqpair=0x7f70ac000b90 continues uninterrupted, 14:33:02.431 through 14:33:02.450 ...]
[... final repetitions for tqpair=0x7f70ac000b90, 14:33:02.450 through 14:33:02.452, then the failing qpair handle changes ...]
00:34:57.472 [2024-11-25 14:33:02.452904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:57.472 [2024-11-25 14:33:02.453014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420 00:34:57.472 qpair failed and we were unable to recover it.
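Note the tqpair handle changing here from 0x7f70ac000b90 to 0xae20c0: the host driver has torn down the old qpair object and is retrying with a freshly allocated one, still against the refused 10.0.0.2:4420. If a listener were expected to be up at this point, one way to check the target's configuration would be SPDK's RPC (illustrative invocation; assumes the default RPC socket):

# List configured NVMe-oF subsystems, including their listen addresses,
# to verify a TCP listener on 10.0.0.2:4420 actually exists
./scripts/rpc.py nvmf_get_subsystems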
00:34:57.472 [2024-11-25 14:33:02.454004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:57.472 [2024-11-25 14:33:02.454035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae20c0 with addr=10.0.0.2, port=4420
00:34:57.472 qpair failed and we were unable to recover it.
00:34:57.473 [... this connect()/sock-connection-error/qpair-failed triplet repeats continuously while the host retries 10.0.0.2:4420 during the target setup below ...]
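errno 111 is ECONNREFUSED: the host-side initiator keeps retrying 10.0.0.2:4420 before the target below has been configured and started listening, so every connect() is refused. A quick hedged way to confirm that from the test node is sketched here; it assumes ss and nc are installed, which this log does not itself establish:

  # hypothetical spot-checks, not part of the autotest scripts
  ss -ltn | grep 4420        # is anything listening on the NVMe/TCP port yet?
  nc -zv -w 2 10.0.0.2 4420  # a refused probe is the same errno 111 (ECONNREFUSED) seen above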
00:34:57.473 Malloc0
00:34:57.473 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.474 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:57.474 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.474 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:57.474 [2024-11-25 14:33:02.474931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
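rpc_cmd in these autotest scripts is a thin wrapper around SPDK's scripts/rpc.py, so the transport-creation step traced above corresponds roughly to the sketch below; the script path and the reading of -o are assumptions from rpc.py conventions rather than anything this log confirms:

  # hypothetical direct equivalent of the rpc_cmd trace above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o  # -t selects the TCP transport; -o is believed to disable the C2H success optimization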
00:34:57.475 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.475 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:57.475 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.475 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
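nvmf_create_subsystem establishes the NQN that every later step (namespace add, listener add, host-side CONNECT) refers back to. A hedged stand-alone equivalent, again assuming the standard rpc.py location:

  # hypothetical equivalent: -a allows any host NQN to connect, -s sets the serial number
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001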
00:34:57.476 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.476 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:57.476 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.476 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
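nvmf_subsystem_add_ns attaches the Malloc0 bdev announced earlier in this log to cnode1 as a namespace. Reproducing this by hand requires the bdev to exist first; a minimal hedged sequence (the 64 MiB / 512-byte sizes are illustrative, not taken from this log) would be:

  # hypothetical manual reproduction
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0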
00:34:57.477 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.477 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:57.477 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:57.477 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
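Only once nvmf_subsystem_add_listener completes does the target accept connections on 10.0.0.2:4420, which is why every connect() up to this point was refused. The hedged direct equivalent:

  # hypothetical equivalent of the rpc_cmd trace above
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420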
00:34:57.477 [2024-11-25 14:33:02.515345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-11-25 14:33:02.526280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-11-25 14:33:02.526424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-11-25 14:33:02.526487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-11-25 14:33:02.526511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-11-25 14:33:02.526533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
[2024-11-25 14:33:02.526590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:34:57.740 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:57.740 14:33:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3624505
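The failure mode changes at this point: TCP connect() now succeeds (the target is listening), but the Fabrics CONNECT for an I/O queue is rejected because it names controller ID 0x1, which the target no longer recognizes; exercising exactly this disconnect/recovery path is the purpose of nvmf_target_disconnect_tc2. sct 1, sc 130 decodes as command-specific status 0x82, which for CONNECT is Invalid Parameters in the NVMe-oF spec. Hedged one-liners for counting the two sides of the failure in a captured log (the file name is illustrative):

  grep -c 'Unknown controller ID' build.log  # target-side rejections of the stale controller ID
  grep -c 'sct 1, sc 130' build.log          # host-side CONNECT completions carrying 0x82 (Invalid Parameters)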
00:34:57.741 [... from here the Unknown-controller-ID / CONNECT failed (sct 1, sc 130) / CQ transport error -6 sequence above repeats roughly every 10 ms as each further qpair attempt is polled ...]
00:34:57.741 [2024-11-25 14:33:02.636275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.741 [2024-11-25 14:33:02.636348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.741 [2024-11-25 14:33:02.636365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.741 [2024-11-25 14:33:02.636373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.741 [2024-11-25 14:33:02.636380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.741 [2024-11-25 14:33:02.636397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.741 qpair failed and we were unable to recover it. 00:34:57.741 [2024-11-25 14:33:02.646316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.741 [2024-11-25 14:33:02.646390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.741 [2024-11-25 14:33:02.646407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.741 [2024-11-25 14:33:02.646415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.741 [2024-11-25 14:33:02.646421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.741 [2024-11-25 14:33:02.646438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.741 qpair failed and we were unable to recover it. 00:34:57.741 [2024-11-25 14:33:02.656330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.741 [2024-11-25 14:33:02.656394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.741 [2024-11-25 14:33:02.656411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.741 [2024-11-25 14:33:02.656418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.741 [2024-11-25 14:33:02.656425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.741 [2024-11-25 14:33:02.656442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.741 qpair failed and we were unable to recover it. 
00:34:57.741 [2024-11-25 14:33:02.666332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.741 [2024-11-25 14:33:02.666426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.741 [2024-11-25 14:33:02.666443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.666450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.666457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.666474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.676276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.676360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.676376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.676384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.676390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.676407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.686469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.686594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.686623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.686635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.686642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.686662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 
00:34:57.742 [2024-11-25 14:33:02.696339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.696400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.696427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.696435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.696441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.696460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.706461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.706524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.706543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.706551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.706557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.706574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.716498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.716567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.716585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.716592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.716599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.716616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 
00:34:57.742 [2024-11-25 14:33:02.726444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.726519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.726536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.726544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.726550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.726567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.736743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.736823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.736840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.736848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.736860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.736878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.746648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.746716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.746735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.746742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.746749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.746766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 
00:34:57.742 [2024-11-25 14:33:02.756676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.756746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.756762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.756770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.756776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.756793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.766735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.766803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.766820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.766827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.766834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.766850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.776660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.776722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.776738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.776746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.776752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.776769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 
00:34:57.742 [2024-11-25 14:33:02.786723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.786782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.786800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.786807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.742 [2024-11-25 14:33:02.786814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.742 [2024-11-25 14:33:02.786831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.742 qpair failed and we were unable to recover it. 00:34:57.742 [2024-11-25 14:33:02.796720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.742 [2024-11-25 14:33:02.796786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.742 [2024-11-25 14:33:02.796804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.742 [2024-11-25 14:33:02.796812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.743 [2024-11-25 14:33:02.796818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.743 [2024-11-25 14:33:02.796835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.743 qpair failed and we were unable to recover it. 00:34:57.743 [2024-11-25 14:33:02.806811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.743 [2024-11-25 14:33:02.806923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.743 [2024-11-25 14:33:02.806939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.743 [2024-11-25 14:33:02.806947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.743 [2024-11-25 14:33:02.806953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.743 [2024-11-25 14:33:02.806970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.743 qpair failed and we were unable to recover it. 
00:34:57.743 [2024-11-25 14:33:02.816700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.743 [2024-11-25 14:33:02.816803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.743 [2024-11-25 14:33:02.816841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.743 [2024-11-25 14:33:02.816851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.743 [2024-11-25 14:33:02.816858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.743 [2024-11-25 14:33:02.816883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.743 qpair failed and we were unable to recover it. 00:34:57.743 [2024-11-25 14:33:02.826801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:57.743 [2024-11-25 14:33:02.826868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:57.743 [2024-11-25 14:33:02.826914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:57.743 [2024-11-25 14:33:02.826924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:57.743 [2024-11-25 14:33:02.826931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:57.743 [2024-11-25 14:33:02.826956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:57.743 qpair failed and we were unable to recover it. 00:34:58.006 [2024-11-25 14:33:02.836841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.006 [2024-11-25 14:33:02.836924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.006 [2024-11-25 14:33:02.836961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.006 [2024-11-25 14:33:02.836972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.006 [2024-11-25 14:33:02.836980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.837004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 
00:34:58.007 [2024-11-25 14:33:02.846777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.846845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.846866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.846874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.846880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.846899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.856915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.856983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.857002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.857010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.857016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.857034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.866840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.866921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.866941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.866949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.866963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.866981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 
00:34:58.007 [2024-11-25 14:33:02.876935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.877038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.877054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.877062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.877069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.877086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.887028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.887105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.887123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.887131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.887137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.887154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.897041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.897101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.897118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.897126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.897132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.897149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 
00:34:58.007 [2024-11-25 14:33:02.906929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.906991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.907009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.907016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.907023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.907039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.917077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.917146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.917170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.917178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.917185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.917202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.927199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.927286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.927303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.927311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.927317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.927334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 
00:34:58.007 [2024-11-25 14:33:02.937105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.937183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.937203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.937211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.937218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.937235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.947222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.947298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.947314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.947322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.947328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.947346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 00:34:58.007 [2024-11-25 14:33:02.957237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.957335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.957356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.957364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.007 [2024-11-25 14:33:02.957371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.007 [2024-11-25 14:33:02.957387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.007 qpair failed and we were unable to recover it. 
00:34:58.007 [2024-11-25 14:33:02.967247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.007 [2024-11-25 14:33:02.967321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.007 [2024-11-25 14:33:02.967338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.007 [2024-11-25 14:33:02.967345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:02.967351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:02.967368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:02.977153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:02.977227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:02.977245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:02.977252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:02.977259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:02.977275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:02.987332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:02.987403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:02.987420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:02.987428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:02.987434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:02.987452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 
00:34:58.008 [2024-11-25 14:33:02.997305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:02.997382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:02.997404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:02.997415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:02.997428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:02.997446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:03.007420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.007493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.007511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.007519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.007526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.007544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:03.017318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.017381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.017398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.017406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.017412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.017429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 
00:34:58.008 [2024-11-25 14:33:03.027397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.027468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.027484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.027491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.027498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.027514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:03.037424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.037493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.037511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.037519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.037525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.037542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:03.047537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.047605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.047622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.047630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.047637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.047654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 
00:34:58.008 [2024-11-25 14:33:03.057490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.057552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.057568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.057576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.057583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.057599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:03.067554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.067648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.067666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.067674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.067681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.067697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.008 [2024-11-25 14:33:03.077578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.077641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.077658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.077666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.077673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.077690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 
00:34:58.008 [2024-11-25 14:33:03.087651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.008 [2024-11-25 14:33:03.087722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.008 [2024-11-25 14:33:03.087750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.008 [2024-11-25 14:33:03.087757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.008 [2024-11-25 14:33:03.087764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.008 [2024-11-25 14:33:03.087783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.008 qpair failed and we were unable to recover it. 00:34:58.271 [2024-11-25 14:33:03.097657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.271 [2024-11-25 14:33:03.097718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.271 [2024-11-25 14:33:03.097735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.271 [2024-11-25 14:33:03.097743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.271 [2024-11-25 14:33:03.097750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.271 [2024-11-25 14:33:03.097767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.271 qpair failed and we were unable to recover it. 00:34:58.271 [2024-11-25 14:33:03.107666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.271 [2024-11-25 14:33:03.107732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.271 [2024-11-25 14:33:03.107749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.271 [2024-11-25 14:33:03.107757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.107764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.107781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 
00:34:58.272 [2024-11-25 14:33:03.117698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.117768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.117785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.117793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.117799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.117816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 00:34:58.272 [2024-11-25 14:33:03.127763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.127844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.127863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.127870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.127883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.127901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 00:34:58.272 [2024-11-25 14:33:03.137805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.137867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.137884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.137891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.137898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.137914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 
00:34:58.272 [2024-11-25 14:33:03.147791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.147857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.147894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.147904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.147911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.147936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 00:34:58.272 [2024-11-25 14:33:03.157819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.157894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.157931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.157941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.157950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.157974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 00:34:58.272 [2024-11-25 14:33:03.167879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.167963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.168001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.168011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.168018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.168042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 
00:34:58.272 [2024-11-25 14:33:03.177803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.177881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.177902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.177909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.177916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.177934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 00:34:58.272 [2024-11-25 14:33:03.187904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.187980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.187998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.188007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.188016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.188034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 00:34:58.272 [2024-11-25 14:33:03.197969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.198041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.198057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.198065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.198072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.198088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.272 qpair failed and we were unable to recover it. 
00:34:58.272 [2024-11-25 14:33:03.207992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.272 [2024-11-25 14:33:03.208078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.272 [2024-11-25 14:33:03.208098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.272 [2024-11-25 14:33:03.208106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.272 [2024-11-25 14:33:03.208113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.272 [2024-11-25 14:33:03.208130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.217993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.218071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.218094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.218102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.218108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.218125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.228023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.228090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.228107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.228115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.228121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.228137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 
00:34:58.273 [2024-11-25 14:33:03.238086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.238189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.238208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.238216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.238223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.238239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.248138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.248258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.248276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.248283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.248290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.248307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.258131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.258208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.258225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.258233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.258244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.258262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 
00:34:58.273 [2024-11-25 14:33:03.268151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.268232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.268248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.268256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.268262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.268278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.278188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.278267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.278284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.278291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.278298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.278315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.288223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.288330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.288348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.288356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.288363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.288380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 
00:34:58.273 [2024-11-25 14:33:03.298278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.298348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.298365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.298373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.298380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.298396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.308267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.308324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.308343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.308351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.308357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.308375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.318317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.318388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.318406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.318414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.318420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.318437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 
00:34:58.273 [2024-11-25 14:33:03.328368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.328464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.328481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.328488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.328495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.328512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.273 [2024-11-25 14:33:03.338374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.273 [2024-11-25 14:33:03.338443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.273 [2024-11-25 14:33:03.338460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.273 [2024-11-25 14:33:03.338468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.273 [2024-11-25 14:33:03.338475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.273 [2024-11-25 14:33:03.338492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.273 qpair failed and we were unable to recover it. 00:34:58.274 [2024-11-25 14:33:03.348475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.274 [2024-11-25 14:33:03.348556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.274 [2024-11-25 14:33:03.348578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.274 [2024-11-25 14:33:03.348586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.274 [2024-11-25 14:33:03.348592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.274 [2024-11-25 14:33:03.348609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.274 qpair failed and we were unable to recover it. 
00:34:58.274 [2024-11-25 14:33:03.358494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.274 [2024-11-25 14:33:03.358577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.274 [2024-11-25 14:33:03.358594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.274 [2024-11-25 14:33:03.358602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.274 [2024-11-25 14:33:03.358608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.537 [2024-11-25 14:33:03.358624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.537 qpair failed and we were unable to recover it. 00:34:58.537 [2024-11-25 14:33:03.368520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.537 [2024-11-25 14:33:03.368592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.537 [2024-11-25 14:33:03.368609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.537 [2024-11-25 14:33:03.368616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.537 [2024-11-25 14:33:03.368623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.537 [2024-11-25 14:33:03.368639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.537 qpair failed and we were unable to recover it. 00:34:58.537 [2024-11-25 14:33:03.378384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.537 [2024-11-25 14:33:03.378449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.537 [2024-11-25 14:33:03.378466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.537 [2024-11-25 14:33:03.378474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.537 [2024-11-25 14:33:03.378481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.537 [2024-11-25 14:33:03.378497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.537 qpair failed and we were unable to recover it. 
00:34:58.537 [2024-11-25 14:33:03.388531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.537 [2024-11-25 14:33:03.388603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.537 [2024-11-25 14:33:03.388621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.537 [2024-11-25 14:33:03.388629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.537 [2024-11-25 14:33:03.388641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.537 [2024-11-25 14:33:03.388658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.537 qpair failed and we were unable to recover it. 00:34:58.537 [2024-11-25 14:33:03.398605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.537 [2024-11-25 14:33:03.398724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.537 [2024-11-25 14:33:03.398741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.537 [2024-11-25 14:33:03.398749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.537 [2024-11-25 14:33:03.398756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.537 [2024-11-25 14:33:03.398772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.537 qpair failed and we were unable to recover it. 00:34:58.537 [2024-11-25 14:33:03.408692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.537 [2024-11-25 14:33:03.408780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.537 [2024-11-25 14:33:03.408798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.537 [2024-11-25 14:33:03.408806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.408813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.408830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 
00:34:58.538 [2024-11-25 14:33:03.418583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.418646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.418663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.418671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.418677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.418694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.428631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.428699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.428716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.428724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.428730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.428747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.438669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.438747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.438764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.438772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.438778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.438795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 
00:34:58.538 [2024-11-25 14:33:03.448722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.448835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.448857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.448865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.448871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.448889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.458679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.458770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.458807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.458818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.458825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.458850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.468743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.468850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.468889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.468899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.468906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.468931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 
00:34:58.538 [2024-11-25 14:33:03.478736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.478815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.478860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.478869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.478877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.478901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.488857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.488940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.488961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.488969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.488975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.488995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.498808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.498870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.498889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.498896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.498903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.498921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 
00:34:58.538 [2024-11-25 14:33:03.508856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.508928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.508947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.508954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.508961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.508978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.518863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.518931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.518947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.518955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.518969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.518987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 00:34:58.538 [2024-11-25 14:33:03.528927] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.529039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.529057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.529065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.529071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.538 [2024-11-25 14:33:03.529089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.538 qpair failed and we were unable to recover it. 
00:34:58.538 [2024-11-25 14:33:03.538956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.538 [2024-11-25 14:33:03.539018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.538 [2024-11-25 14:33:03.539035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.538 [2024-11-25 14:33:03.539042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.538 [2024-11-25 14:33:03.539049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.539066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 00:34:58.539 [2024-11-25 14:33:03.548998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.549094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.549111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.549118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.549125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.549142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 00:34:58.539 [2024-11-25 14:33:03.559019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.559086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.559103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.559110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.559117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.559133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 
00:34:58.539 [2024-11-25 14:33:03.569060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.569130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.569148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.569155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.569169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.569186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 00:34:58.539 [2024-11-25 14:33:03.579050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.579109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.579126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.579133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.579139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.579155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 00:34:58.539 [2024-11-25 14:33:03.589011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.589081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.589099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.589107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.589114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.589130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 
00:34:58.539 [2024-11-25 14:33:03.599156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.599236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.599255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.599263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.599270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.599287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 00:34:58.539 [2024-11-25 14:33:03.609210] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.609287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.609310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.609319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.609325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.609344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 00:34:58.539 [2024-11-25 14:33:03.619198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.539 [2024-11-25 14:33:03.619268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.539 [2024-11-25 14:33:03.619285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.539 [2024-11-25 14:33:03.619293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.539 [2024-11-25 14:33:03.619301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.539 [2024-11-25 14:33:03.619317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.539 qpair failed and we were unable to recover it. 
00:34:58.802 [2024-11-25 14:33:03.629275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.802 [2024-11-25 14:33:03.629379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.802 [2024-11-25 14:33:03.629396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.802 [2024-11-25 14:33:03.629404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.802 [2024-11-25 14:33:03.629411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.629428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 00:34:58.803 [2024-11-25 14:33:03.639321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.639421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.639438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.639446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.639452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.639469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 00:34:58.803 [2024-11-25 14:33:03.649324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.649433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.649455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.649463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.649475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.649494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 
00:34:58.803 [2024-11-25 14:33:03.659328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.659397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.659415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.659423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.659429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.659446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 00:34:58.803 [2024-11-25 14:33:03.669348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.669416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.669433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.669441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.669447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.669464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 00:34:58.803 [2024-11-25 14:33:03.679386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.679493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.679510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.679518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.679525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.679541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 
00:34:58.803 [2024-11-25 14:33:03.689488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.689568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.689586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.689593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.689600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.689617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 00:34:58.803 [2024-11-25 14:33:03.699449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.699510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.699528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.699535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.699542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.699558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 00:34:58.803 [2024-11-25 14:33:03.709480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:58.803 [2024-11-25 14:33:03.709557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:58.803 [2024-11-25 14:33:03.709575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:58.803 [2024-11-25 14:33:03.709583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:58.803 [2024-11-25 14:33:03.709589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:58.803 [2024-11-25 14:33:03.709606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:58.803 qpair failed and we were unable to recover it. 
00:34:58.803 [2024-11-25 14:33:03.719520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.803 [2024-11-25 14:33:03.719587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.803 [2024-11-25 14:33:03.719603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.803 [2024-11-25 14:33:03.719611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.803 [2024-11-25 14:33:03.719617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.803 [2024-11-25 14:33:03.719634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.803 qpair failed and we were unable to recover it.
00:34:58.803 [2024-11-25 14:33:03.729588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.803 [2024-11-25 14:33:03.729668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.803 [2024-11-25 14:33:03.729686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.803 [2024-11-25 14:33:03.729693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.803 [2024-11-25 14:33:03.729700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.803 [2024-11-25 14:33:03.729717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.803 qpair failed and we were unable to recover it.
00:34:58.803 [2024-11-25 14:33:03.739552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.803 [2024-11-25 14:33:03.739617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.803 [2024-11-25 14:33:03.739639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.803 [2024-11-25 14:33:03.739647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.803 [2024-11-25 14:33:03.739653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.803 [2024-11-25 14:33:03.739670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.803 qpair failed and we were unable to recover it.
00:34:58.803 [2024-11-25 14:33:03.749604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.803 [2024-11-25 14:33:03.749661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.803 [2024-11-25 14:33:03.749678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.803 [2024-11-25 14:33:03.749685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.803 [2024-11-25 14:33:03.749692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.803 [2024-11-25 14:33:03.749709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.803 qpair failed and we were unable to recover it.
00:34:58.803 [2024-11-25 14:33:03.759599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.803 [2024-11-25 14:33:03.759668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.803 [2024-11-25 14:33:03.759685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.803 [2024-11-25 14:33:03.759693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.803 [2024-11-25 14:33:03.759700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.803 [2024-11-25 14:33:03.759716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.769660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.769742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.769759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.769766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.769773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.769789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.779707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.779767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.779783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.779791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.779803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.779820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.789687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.789753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.789771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.789778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.789785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.789802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.799766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.799836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.799852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.799860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.799866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.799884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.809824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.809890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.809908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.809915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.809922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.809939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.819849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.819915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.819953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.819962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.819969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.819994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.829886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.829948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.829971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.829979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.829986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.830005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.839923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.840036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.840055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.840063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.840069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.840087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.849924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.849997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.850014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.850022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.850029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.850046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.859915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.859975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.859992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.860000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.860006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.860024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.870016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.870083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.870105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.870113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.870119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.870137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:58.804 [2024-11-25 14:33:03.879880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:58.804 [2024-11-25 14:33:03.879944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:58.804 [2024-11-25 14:33:03.879965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:58.804 [2024-11-25 14:33:03.879973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:58.804 [2024-11-25 14:33:03.879980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:58.804 [2024-11-25 14:33:03.879999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:58.804 qpair failed and we were unable to recover it.
00:34:59.069 [2024-11-25 14:33:03.890070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.069 [2024-11-25 14:33:03.890150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.069 [2024-11-25 14:33:03.890175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.069 [2024-11-25 14:33:03.890183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.069 [2024-11-25 14:33:03.890191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.069 [2024-11-25 14:33:03.890211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.069 qpair failed and we were unable to recover it.
00:34:59.069 [2024-11-25 14:33:03.900011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.069 [2024-11-25 14:33:03.900085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.069 [2024-11-25 14:33:03.900105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.069 [2024-11-25 14:33:03.900112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.069 [2024-11-25 14:33:03.900119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.069 [2024-11-25 14:33:03.900136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.069 qpair failed and we were unable to recover it.
00:34:59.069 [2024-11-25 14:33:03.910069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.069 [2024-11-25 14:33:03.910184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.069 [2024-11-25 14:33:03.910203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.069 [2024-11-25 14:33:03.910210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.069 [2024-11-25 14:33:03.910224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.069 [2024-11-25 14:33:03.910241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.069 qpair failed and we were unable to recover it.
00:34:59.069 [2024-11-25 14:33:03.920131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.069 [2024-11-25 14:33:03.920211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.069 [2024-11-25 14:33:03.920228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.069 [2024-11-25 14:33:03.920236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.069 [2024-11-25 14:33:03.920242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.069 [2024-11-25 14:33:03.920260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.069 qpair failed and we were unable to recover it.
00:34:59.069 [2024-11-25 14:33:03.930182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:03.930246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:03.930263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:03.930271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:03.930278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:03.930295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:03.940113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:03.940183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:03.940203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:03.940216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:03.940222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:03.940241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:03.950198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:03.950302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:03.950321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:03.950329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:03.950336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:03.950353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:03.960099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:03.960177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:03.960195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:03.960202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:03.960209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:03.960226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:03.970279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:03.970362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:03.970378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:03.970386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:03.970393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:03.970409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:03.980275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:03.980367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:03.980383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:03.980391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:03.980397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:03.980414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:03.990976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:03.991052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:03.991071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:03.991078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:03.991085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:03.991102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:04.000221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:04.000305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:04.000329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:04.000337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:04.000343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:04.000360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:04.010421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:04.010540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:04.010557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:04.010564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:04.010571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:04.010587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:04.020427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:04.020485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:04.020502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:04.020510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:04.020517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:04.020534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:04.030459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:04.030524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:04.030542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:04.030550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:04.030556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:04.030573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:04.040474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:04.040543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:04.040560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:04.040568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:04.040580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:04.040597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:04.050524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.070 [2024-11-25 14:33:04.050625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.070 [2024-11-25 14:33:04.050641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.070 [2024-11-25 14:33:04.050649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.070 [2024-11-25 14:33:04.050656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.070 [2024-11-25 14:33:04.050673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.070 qpair failed and we were unable to recover it.
00:34:59.070 [2024-11-25 14:33:04.060543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.060616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.060636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.060646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.060655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.060673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.070550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.070644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.070662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.070669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.070675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.070691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.080615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.080687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.080704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.080712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.080718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.080735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.090669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.090762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.090786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.090794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.090802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.090821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.100714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.100806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.100824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.100832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.100838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.100856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.110704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.110796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.110814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.110821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.110828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.110845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.120731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.120798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.120814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.120822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.120829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.120846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.130759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.130825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.130850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.130857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.130864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.130880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.140767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.140837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.140876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.140886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.140894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.140918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.071 [2024-11-25 14:33:04.150811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.071 [2024-11-25 14:33:04.150882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.071 [2024-11-25 14:33:04.150920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.071 [2024-11-25 14:33:04.150929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.071 [2024-11-25 14:33:04.150937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.071 [2024-11-25 14:33:04.150961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.071 qpair failed and we were unable to recover it.
00:34:59.336 [2024-11-25 14:33:04.160831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.336 [2024-11-25 14:33:04.160906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.336 [2024-11-25 14:33:04.160937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.336 [2024-11-25 14:33:04.160945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.336 [2024-11-25 14:33:04.160952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.336 [2024-11-25 14:33:04.160973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.336 qpair failed and we were unable to recover it.
00:34:59.336 [2024-11-25 14:33:04.170889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.336 [2024-11-25 14:33:04.170970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.336 [2024-11-25 14:33:04.170990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.336 [2024-11-25 14:33:04.170998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.336 [2024-11-25 14:33:04.171012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.336 [2024-11-25 14:33:04.171030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.336 qpair failed and we were unable to recover it.
00:34:59.336 [2024-11-25 14:33:04.180904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.336 [2024-11-25 14:33:04.181019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.336 [2024-11-25 14:33:04.181038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.336 [2024-11-25 14:33:04.181045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.336 [2024-11-25 14:33:04.181051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.336 [2024-11-25 14:33:04.181069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.336 qpair failed and we were unable to recover it.
00:34:59.336 [2024-11-25 14:33:04.190955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.336 [2024-11-25 14:33:04.191018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.336 [2024-11-25 14:33:04.191035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.336 [2024-11-25 14:33:04.191043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.336 [2024-11-25 14:33:04.191049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.336 [2024-11-25 14:33:04.191067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.336 qpair failed and we were unable to recover it.
00:34:59.336 [2024-11-25 14:33:04.200944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.201028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.201045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.201053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.201059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.201075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.211048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.211124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.211141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.211149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.211155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.211178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.221010] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.221080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.221098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.221106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.221112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.221130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.230906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.230963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.230980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.230988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.230994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.231011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.241052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.241115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.241133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.241140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.241147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.241171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.251135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.251206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.251224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.251232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.251238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.251255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.261121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.261191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.261213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.261221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.261227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.261244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.271143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.271212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.271229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.271237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.271243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.271260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.281190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.337 [2024-11-25 14:33:04.281259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.337 [2024-11-25 14:33:04.281274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.337 [2024-11-25 14:33:04.281282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.337 [2024-11-25 14:33:04.281289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.337 [2024-11-25 14:33:04.281305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.337 qpair failed and we were unable to recover it.
00:34:59.337 [2024-11-25 14:33:04.291307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.337 [2024-11-25 14:33:04.291374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.337 [2024-11-25 14:33:04.291393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.337 [2024-11-25 14:33:04.291401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.337 [2024-11-25 14:33:04.291407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.337 [2024-11-25 14:33:04.291424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.337 qpair failed and we were unable to recover it. 00:34:59.337 [2024-11-25 14:33:04.301252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.337 [2024-11-25 14:33:04.301357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.337 [2024-11-25 14:33:04.301374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.337 [2024-11-25 14:33:04.301382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.337 [2024-11-25 14:33:04.301395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.337 [2024-11-25 14:33:04.301411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.337 qpair failed and we were unable to recover it. 00:34:59.337 [2024-11-25 14:33:04.311261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.337 [2024-11-25 14:33:04.311325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.337 [2024-11-25 14:33:04.311343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.337 [2024-11-25 14:33:04.311350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.337 [2024-11-25 14:33:04.311357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.337 [2024-11-25 14:33:04.311373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.337 qpair failed and we were unable to recover it. 
00:34:59.337 [2024-11-25 14:33:04.321253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.337 [2024-11-25 14:33:04.321328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.337 [2024-11-25 14:33:04.321344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.337 [2024-11-25 14:33:04.321352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.337 [2024-11-25 14:33:04.321358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.337 [2024-11-25 14:33:04.321375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.337 qpair failed and we were unable to recover it. 00:34:59.337 [2024-11-25 14:33:04.331383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.331458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.331474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.331481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.331488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.331505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 00:34:59.338 [2024-11-25 14:33:04.341370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.341444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.341461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.341468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.341474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.341491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 
00:34:59.338 [2024-11-25 14:33:04.351276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.351369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.351386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.351393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.351400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.351417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 00:34:59.338 [2024-11-25 14:33:04.361431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.361502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.361519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.361526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.361533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.361550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 00:34:59.338 [2024-11-25 14:33:04.371534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.371636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.371652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.371660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.371667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.371683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 
00:34:59.338 [2024-11-25 14:33:04.381485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.381546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.381564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.381572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.381579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.381596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 00:34:59.338 [2024-11-25 14:33:04.391515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.391579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.391601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.391608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.391615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.391632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 00:34:59.338 [2024-11-25 14:33:04.401420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.401486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.401503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.401511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.401517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.401534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 
00:34:59.338 [2024-11-25 14:33:04.411639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.411711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.411728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.411735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.411741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.411758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 00:34:59.338 [2024-11-25 14:33:04.421599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.338 [2024-11-25 14:33:04.421680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.338 [2024-11-25 14:33:04.421696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.338 [2024-11-25 14:33:04.421703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.338 [2024-11-25 14:33:04.421709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.338 [2024-11-25 14:33:04.421726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.338 qpair failed and we were unable to recover it. 00:34:59.601 [2024-11-25 14:33:04.431638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.601 [2024-11-25 14:33:04.431696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.601 [2024-11-25 14:33:04.431712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.601 [2024-11-25 14:33:04.431719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.601 [2024-11-25 14:33:04.431732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.601 [2024-11-25 14:33:04.431748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.601 qpair failed and we were unable to recover it. 
00:34:59.601 [2024-11-25 14:33:04.441671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.601 [2024-11-25 14:33:04.441743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.601 [2024-11-25 14:33:04.441760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.601 [2024-11-25 14:33:04.441767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.601 [2024-11-25 14:33:04.441773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.601 [2024-11-25 14:33:04.441789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.601 qpair failed and we were unable to recover it. 00:34:59.601 [2024-11-25 14:33:04.451763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.601 [2024-11-25 14:33:04.451850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.601 [2024-11-25 14:33:04.451866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.601 [2024-11-25 14:33:04.451873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.601 [2024-11-25 14:33:04.451880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.601 [2024-11-25 14:33:04.451896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.601 qpair failed and we were unable to recover it. 00:34:59.601 [2024-11-25 14:33:04.461707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.601 [2024-11-25 14:33:04.461766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.601 [2024-11-25 14:33:04.461783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.601 [2024-11-25 14:33:04.461791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.601 [2024-11-25 14:33:04.461797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.461814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 
00:34:59.602 [2024-11-25 14:33:04.471786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.471858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.471874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.471881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.471887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.471904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.481832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.481917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.481955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.481964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.481972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.481998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.491755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.491822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.491845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.491853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.491860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.491880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 
00:34:59.602 [2024-11-25 14:33:04.501907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.501971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.501990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.501997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.502004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.502022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.511912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.511980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.511998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.512005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.512012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.512029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.521950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.522019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.522042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.522050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.522056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.522073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 
00:34:59.602 [2024-11-25 14:33:04.531891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.531966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.531983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.531990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.531997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.532014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.542051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.542115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.542133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.542140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.542147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.542168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.552048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.552108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.552126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.552133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.552140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.552163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 
00:34:59.602 [2024-11-25 14:33:04.562109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.562229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.562247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.562254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.562266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.562284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.572123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.572204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.572222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.572230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.572236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.572252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.602 [2024-11-25 14:33:04.582091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.582156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.582178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.582186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.582192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.582209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 
00:34:59.602 [2024-11-25 14:33:04.592147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.602 [2024-11-25 14:33:04.592214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.602 [2024-11-25 14:33:04.592231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.602 [2024-11-25 14:33:04.592238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.602 [2024-11-25 14:33:04.592245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.602 [2024-11-25 14:33:04.592262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.602 qpair failed and we were unable to recover it. 00:34:59.603 [2024-11-25 14:33:04.602217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.602291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.602307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.602315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.602321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.602339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 00:34:59.603 [2024-11-25 14:33:04.612252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.612326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.612343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.612351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.612358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.612375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 
00:34:59.603 [2024-11-25 14:33:04.622268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.622348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.622364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.622372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.622378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.622395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 00:34:59.603 [2024-11-25 14:33:04.632269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.632329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.632345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.632353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.632359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.632376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 00:34:59.603 [2024-11-25 14:33:04.642307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.642377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.642394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.642401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.642407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.642424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 
00:34:59.603 [2024-11-25 14:33:04.652376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.652450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.652473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.652481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.652488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.652505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 00:34:59.603 [2024-11-25 14:33:04.662343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.662447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.662464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.662472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.662478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.662495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 00:34:59.603 [2024-11-25 14:33:04.672425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.672497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.672518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.672525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.672532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.672549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 
00:34:59.603 [2024-11-25 14:33:04.682328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.603 [2024-11-25 14:33:04.682396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.603 [2024-11-25 14:33:04.682412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.603 [2024-11-25 14:33:04.682420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.603 [2024-11-25 14:33:04.682426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.603 [2024-11-25 14:33:04.682443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.603 qpair failed and we were unable to recover it. 00:34:59.866 [2024-11-25 14:33:04.692534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.692605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.866 [2024-11-25 14:33:04.692623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.866 [2024-11-25 14:33:04.692631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.866 [2024-11-25 14:33:04.692643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.866 [2024-11-25 14:33:04.692661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.866 qpair failed and we were unable to recover it. 00:34:59.866 [2024-11-25 14:33:04.702501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.702566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.866 [2024-11-25 14:33:04.702584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.866 [2024-11-25 14:33:04.702592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.866 [2024-11-25 14:33:04.702598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.866 [2024-11-25 14:33:04.702615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.866 qpair failed and we were unable to recover it. 
00:34:59.866 [2024-11-25 14:33:04.712517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.712585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.866 [2024-11-25 14:33:04.712602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.866 [2024-11-25 14:33:04.712611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.866 [2024-11-25 14:33:04.712617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.866 [2024-11-25 14:33:04.712634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.866 qpair failed and we were unable to recover it. 00:34:59.866 [2024-11-25 14:33:04.722537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.722633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.866 [2024-11-25 14:33:04.722650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.866 [2024-11-25 14:33:04.722658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.866 [2024-11-25 14:33:04.722666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.866 [2024-11-25 14:33:04.722683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.866 qpair failed and we were unable to recover it. 00:34:59.866 [2024-11-25 14:33:04.732513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.732587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.866 [2024-11-25 14:33:04.732605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.866 [2024-11-25 14:33:04.732613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.866 [2024-11-25 14:33:04.732619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.866 [2024-11-25 14:33:04.732636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.866 qpair failed and we were unable to recover it. 
00:34:59.866 [2024-11-25 14:33:04.742607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.742672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.866 [2024-11-25 14:33:04.742689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.866 [2024-11-25 14:33:04.742697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.866 [2024-11-25 14:33:04.742704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.866 [2024-11-25 14:33:04.742722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.866 qpair failed and we were unable to recover it. 00:34:59.866 [2024-11-25 14:33:04.752682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.752785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.866 [2024-11-25 14:33:04.752801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.866 [2024-11-25 14:33:04.752809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.866 [2024-11-25 14:33:04.752816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.866 [2024-11-25 14:33:04.752832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.866 qpair failed and we were unable to recover it. 00:34:59.866 [2024-11-25 14:33:04.762741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.866 [2024-11-25 14:33:04.762857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.867 [2024-11-25 14:33:04.762874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.867 [2024-11-25 14:33:04.762881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.867 [2024-11-25 14:33:04.762888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.867 [2024-11-25 14:33:04.762904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.867 qpair failed and we were unable to recover it. 
00:34:59.867 [2024-11-25 14:33:04.772779] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.867 [2024-11-25 14:33:04.772844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.867 [2024-11-25 14:33:04.772861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.867 [2024-11-25 14:33:04.772868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.867 [2024-11-25 14:33:04.772874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.867 [2024-11-25 14:33:04.772891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.867 qpair failed and we were unable to recover it. 00:34:59.867 [2024-11-25 14:33:04.782728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.867 [2024-11-25 14:33:04.782810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.867 [2024-11-25 14:33:04.782832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.867 [2024-11-25 14:33:04.782839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.867 [2024-11-25 14:33:04.782845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.867 [2024-11-25 14:33:04.782863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.867 qpair failed and we were unable to recover it. 00:34:59.867 [2024-11-25 14:33:04.792684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:59.867 [2024-11-25 14:33:04.792748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:59.867 [2024-11-25 14:33:04.792766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:59.867 [2024-11-25 14:33:04.792773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:59.867 [2024-11-25 14:33:04.792780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:34:59.867 [2024-11-25 14:33:04.792797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:59.867 qpair failed and we were unable to recover it. 
00:34:59.867 [2024-11-25 14:33:04.802827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.802892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.802909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.802917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.802923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.802939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.812900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.812972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.812989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.812997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.813003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.813020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.822770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.822840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.822856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.822863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.822875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.822892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.832951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.833016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.833055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.833064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.833071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.833096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.842987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.843057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.843078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.843085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.843092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.843111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.853014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.853084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.853101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.853108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.853115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.853132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.862984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.863051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.863068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.863076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.863082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.863099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.873026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.873093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.873111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.873119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.873125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.873142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.883076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.883148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.883173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.867 [2024-11-25 14:33:04.883180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.867 [2024-11-25 14:33:04.883187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.867 [2024-11-25 14:33:04.883204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.867 qpair failed and we were unable to recover it.
00:34:59.867 [2024-11-25 14:33:04.893137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.867 [2024-11-25 14:33:04.893210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.867 [2024-11-25 14:33:04.893229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.868 [2024-11-25 14:33:04.893236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.868 [2024-11-25 14:33:04.893243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.868 [2024-11-25 14:33:04.893261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.868 qpair failed and we were unable to recover it.
00:34:59.868 [2024-11-25 14:33:04.903171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.868 [2024-11-25 14:33:04.903241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.868 [2024-11-25 14:33:04.903258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.868 [2024-11-25 14:33:04.903266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.868 [2024-11-25 14:33:04.903272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.868 [2024-11-25 14:33:04.903289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.868 qpair failed and we were unable to recover it.
00:34:59.868 [2024-11-25 14:33:04.913155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.868 [2024-11-25 14:33:04.913232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.868 [2024-11-25 14:33:04.913254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.868 [2024-11-25 14:33:04.913262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.868 [2024-11-25 14:33:04.913269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.868 [2024-11-25 14:33:04.913286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.868 qpair failed and we were unable to recover it.
00:34:59.868 [2024-11-25 14:33:04.923080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.868 [2024-11-25 14:33:04.923166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.868 [2024-11-25 14:33:04.923183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.868 [2024-11-25 14:33:04.923191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.868 [2024-11-25 14:33:04.923198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.868 [2024-11-25 14:33:04.923214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.868 qpair failed and we were unable to recover it.
00:34:59.868 [2024-11-25 14:33:04.933259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.868 [2024-11-25 14:33:04.933340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.868 [2024-11-25 14:33:04.933357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.868 [2024-11-25 14:33:04.933364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.868 [2024-11-25 14:33:04.933371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.868 [2024-11-25 14:33:04.933387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.868 qpair failed and we were unable to recover it.
00:34:59.868 [2024-11-25 14:33:04.943126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:59.868 [2024-11-25 14:33:04.943193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:59.868 [2024-11-25 14:33:04.943211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:59.868 [2024-11-25 14:33:04.943219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:59.868 [2024-11-25 14:33:04.943225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:34:59.868 [2024-11-25 14:33:04.943241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:34:59.868 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:04.953251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:04.953331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:04.953348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:04.953356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:04.953368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:04.953385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:04.963319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:04.963389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:04.963411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:04.963423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:04.963431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:04.963449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:04.973358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:04.973434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:04.973453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:04.973460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:04.973466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:04.973485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:04.983352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:04.983414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:04.983431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:04.983439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:04.983446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:04.983463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:04.993423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:04.993501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:04.993520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:04.993528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:04.993534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:04.993552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:05.003447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:05.003516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:05.003535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:05.003547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:05.003558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:05.003576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:05.013500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:05.013577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:05.013597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:05.013605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:05.013611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:05.013627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:05.023386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:05.023452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:05.023469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:05.023478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:05.023485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:05.023502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:05.033565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:05.033628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:05.033644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:05.033652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:05.033658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:05.033675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:05.043632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:05.043733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:05.043755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:05.043763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:05.043770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:05.043787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:05.053637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:05.053703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:05.053721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.133 [2024-11-25 14:33:05.053729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.133 [2024-11-25 14:33:05.053735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.133 [2024-11-25 14:33:05.053752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.133 qpair failed and we were unable to recover it.
00:35:00.133 [2024-11-25 14:33:05.063634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.133 [2024-11-25 14:33:05.063730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.133 [2024-11-25 14:33:05.063747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.063754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.063762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.063779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.073664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.073726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.073743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.073750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.073757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.073773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.083699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.083811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.083829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.083837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.083849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.083866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.093726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.093806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.093829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.093836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.093843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.093862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.103807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.103911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.103932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.103940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.103947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.103965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.113752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.113880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.113919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.113929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.113936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.113961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.123837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.123913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.123951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.123961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.123968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.123993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.133890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.133964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.134003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.134013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.134020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.134045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.143908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.144008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.144029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.144037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.144044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.144063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.153914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.153995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.154016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.154023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.154031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.154049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.163956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.164023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.164040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.164048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.164055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.164071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.174003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.174080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.174105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.174112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.174119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.174136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.184006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.184064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.184081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.134 [2024-11-25 14:33:05.184089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.134 [2024-11-25 14:33:05.184095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.134 [2024-11-25 14:33:05.184112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.134 qpair failed and we were unable to recover it.
00:35:00.134 [2024-11-25 14:33:05.194044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.134 [2024-11-25 14:33:05.194103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.134 [2024-11-25 14:33:05.194122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.135 [2024-11-25 14:33:05.194130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.135 [2024-11-25 14:33:05.194137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.135 [2024-11-25 14:33:05.194153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.135 qpair failed and we were unable to recover it.
00:35:00.135 [2024-11-25 14:33:05.204073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.135 [2024-11-25 14:33:05.204147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.135 [2024-11-25 14:33:05.204171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.135 [2024-11-25 14:33:05.204179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.135 [2024-11-25 14:33:05.204185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.135 [2024-11-25 14:33:05.204202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.135 qpair failed and we were unable to recover it.
00:35:00.135 [2024-11-25 14:33:05.214124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.135 [2024-11-25 14:33:05.214219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.135 [2024-11-25 14:33:05.214237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.135 [2024-11-25 14:33:05.214245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.135 [2024-11-25 14:33:05.214258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.135 [2024-11-25 14:33:05.214275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.135 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.224101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.224171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.224188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.224196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.224202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.224219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.234052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.234117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.234138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.234145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.234153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.234180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.244193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.244262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.244279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.244287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.244293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.244310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.254253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.254333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.254350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.254357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.254364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.254381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.264213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.264279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.264297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.264305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.264311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.264328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.274311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.274378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.274395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.274403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.274409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.274427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.284301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.284371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.284388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.284395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.284402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.284418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.294326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.294396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.294414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.294421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.294428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.294444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.304359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.398 [2024-11-25 14:33:05.304430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.398 [2024-11-25 14:33:05.304456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.398 [2024-11-25 14:33:05.304464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.398 [2024-11-25 14:33:05.304474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.398 [2024-11-25 14:33:05.304493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.398 qpair failed and we were unable to recover it.
00:35:00.398 [2024-11-25 14:33:05.314389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.314454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.314473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.314480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.314487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.314505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.324456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.324572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.324590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.324598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.324605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.324621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.334528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.334601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.334619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.334626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.334632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.334649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.344483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.344548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.344565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.344572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.344584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.344602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.354468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.354537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.354555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.354563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.354570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.354587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.364585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.364652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.364669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.364677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.364683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.364700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.374633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.374723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.374741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.374749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.374757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.374774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.384568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.384636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.384653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.384661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.384668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.384685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.394652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.394732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.394750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.394758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.394765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.394781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.404640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.404707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.404725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.404732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.399 [2024-11-25 14:33:05.404739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.399 [2024-11-25 14:33:05.404756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.399 qpair failed and we were unable to recover it.
00:35:00.399 [2024-11-25 14:33:05.414704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.399 [2024-11-25 14:33:05.414784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.399 [2024-11-25 14:33:05.414802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.399 [2024-11-25 14:33:05.414810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.400 [2024-11-25 14:33:05.414816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.400 [2024-11-25 14:33:05.414834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.400 qpair failed and we were unable to recover it.
00:35:00.400 [2024-11-25 14:33:05.424704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.400 [2024-11-25 14:33:05.424761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.400 [2024-11-25 14:33:05.424778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.400 [2024-11-25 14:33:05.424786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.400 [2024-11-25 14:33:05.424792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.400 [2024-11-25 14:33:05.424809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.400 qpair failed and we were unable to recover it.
00:35:00.400 [2024-11-25 14:33:05.434731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.400 [2024-11-25 14:33:05.434792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.400 [2024-11-25 14:33:05.434815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.400 [2024-11-25 14:33:05.434823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.400 [2024-11-25 14:33:05.434829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.400 [2024-11-25 14:33:05.434846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.400 qpair failed and we were unable to recover it.
00:35:00.400 [2024-11-25 14:33:05.444746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.400 [2024-11-25 14:33:05.444819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.400 [2024-11-25 14:33:05.444836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.400 [2024-11-25 14:33:05.444844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.400 [2024-11-25 14:33:05.444850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.400 [2024-11-25 14:33:05.444866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.400 qpair failed and we were unable to recover it.
00:35:00.400 [2024-11-25 14:33:05.454834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.400 [2024-11-25 14:33:05.454911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.400 [2024-11-25 14:33:05.454928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.400 [2024-11-25 14:33:05.454936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.400 [2024-11-25 14:33:05.454942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.400 [2024-11-25 14:33:05.454959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.400 qpair failed and we were unable to recover it.
00:35:00.400 [2024-11-25 14:33:05.464873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.400 [2024-11-25 14:33:05.464974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.400 [2024-11-25 14:33:05.465012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.400 [2024-11-25 14:33:05.465022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.400 [2024-11-25 14:33:05.465030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.400 [2024-11-25 14:33:05.465054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.400 qpair failed and we were unable to recover it.
00:35:00.400 [2024-11-25 14:33:05.474854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.400 [2024-11-25 14:33:05.474928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.400 [2024-11-25 14:33:05.474949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.400 [2024-11-25 14:33:05.474957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.400 [2024-11-25 14:33:05.474970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.400 [2024-11-25 14:33:05.474989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.400 qpair failed and we were unable to recover it.
00:35:00.400 [2024-11-25 14:33:05.484880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:00.400 [2024-11-25 14:33:05.484951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:00.400 [2024-11-25 14:33:05.484968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:00.400 [2024-11-25 14:33:05.484976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:00.698 [2024-11-25 14:33:05.484982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:00.699 [2024-11-25 14:33:05.485008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:00.699 qpair failed and we were unable to recover it.
00:35:00.699 [2024-11-25 14:33:05.494944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.495055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.495073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.495081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.495087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.495105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 00:35:00.699 [2024-11-25 14:33:05.504946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.505027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.505044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.505052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.505059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.505075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 00:35:00.699 [2024-11-25 14:33:05.514986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.515048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.515066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.515074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.515081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.515098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 
00:35:00.699 [2024-11-25 14:33:05.525008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.525076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.525097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.525105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.525112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.525130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 00:35:00.699 [2024-11-25 14:33:05.535062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.535133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.535150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.535165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.535173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.535191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 00:35:00.699 [2024-11-25 14:33:05.545046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.545108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.545125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.545132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.545139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.545157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 
00:35:00.699 [2024-11-25 14:33:05.555058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.555135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.555153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.555166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.555173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.555190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 00:35:00.699 [2024-11-25 14:33:05.565123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.565205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.565228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.565236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.565242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.565260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 00:35:00.699 [2024-11-25 14:33:05.575187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.575267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.575286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.575294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.575301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.575320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 
00:35:00.699 [2024-11-25 14:33:05.585188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.699 [2024-11-25 14:33:05.585260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.699 [2024-11-25 14:33:05.585277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.699 [2024-11-25 14:33:05.585285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.699 [2024-11-25 14:33:05.585292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.699 [2024-11-25 14:33:05.585309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.699 qpair failed and we were unable to recover it. 00:35:00.699 [2024-11-25 14:33:05.595234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.595329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.595346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.595354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.595361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.595379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 00:35:00.700 [2024-11-25 14:33:05.605271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.605339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.605355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.605363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.605375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.605393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 
00:35:00.700 [2024-11-25 14:33:05.615317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.615387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.615404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.615411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.615418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.615435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 00:35:00.700 [2024-11-25 14:33:05.625335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.625462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.625480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.625488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.625495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.625512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 00:35:00.700 [2024-11-25 14:33:05.635347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.635427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.635444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.635451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.635458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.635475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 
00:35:00.700 [2024-11-25 14:33:05.645381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.645454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.645471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.645479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.645485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.645502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 00:35:00.700 [2024-11-25 14:33:05.655424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.655514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.655531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.655539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.655546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.655562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 00:35:00.700 [2024-11-25 14:33:05.665439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.665499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.665516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.665524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.665530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.665548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 
00:35:00.700 [2024-11-25 14:33:05.675471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.675532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.675549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.675556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.675563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.675579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 00:35:00.700 [2024-11-25 14:33:05.685493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.685567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.685584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.700 [2024-11-25 14:33:05.685591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.700 [2024-11-25 14:33:05.685598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.700 [2024-11-25 14:33:05.685615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.700 qpair failed and we were unable to recover it. 00:35:00.700 [2024-11-25 14:33:05.695597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.700 [2024-11-25 14:33:05.695677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.700 [2024-11-25 14:33:05.695704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.701 [2024-11-25 14:33:05.695712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.701 [2024-11-25 14:33:05.695718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.701 [2024-11-25 14:33:05.695738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.701 qpair failed and we were unable to recover it. 
00:35:00.701 [2024-11-25 14:33:05.705628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.701 [2024-11-25 14:33:05.705746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.701 [2024-11-25 14:33:05.705764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.701 [2024-11-25 14:33:05.705771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.701 [2024-11-25 14:33:05.705778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.701 [2024-11-25 14:33:05.705796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.701 qpair failed and we were unable to recover it. 00:35:00.701 [2024-11-25 14:33:05.715494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.701 [2024-11-25 14:33:05.715561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.701 [2024-11-25 14:33:05.715579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.701 [2024-11-25 14:33:05.715586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.701 [2024-11-25 14:33:05.715592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.701 [2024-11-25 14:33:05.715610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.701 qpair failed and we were unable to recover it. 00:35:00.701 [2024-11-25 14:33:05.725652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.701 [2024-11-25 14:33:05.725734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.701 [2024-11-25 14:33:05.725751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.701 [2024-11-25 14:33:05.725759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.701 [2024-11-25 14:33:05.725766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.701 [2024-11-25 14:33:05.725782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.701 qpair failed and we were unable to recover it. 
00:35:00.701 [2024-11-25 14:33:05.735694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.701 [2024-11-25 14:33:05.735769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.701 [2024-11-25 14:33:05.735786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.701 [2024-11-25 14:33:05.735794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.701 [2024-11-25 14:33:05.735806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.701 [2024-11-25 14:33:05.735823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.701 qpair failed and we were unable to recover it. 00:35:00.701 [2024-11-25 14:33:05.745717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.701 [2024-11-25 14:33:05.745781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.701 [2024-11-25 14:33:05.745799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.701 [2024-11-25 14:33:05.745806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.701 [2024-11-25 14:33:05.745813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.701 [2024-11-25 14:33:05.745830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.701 qpair failed and we were unable to recover it. 00:35:00.701 [2024-11-25 14:33:05.755734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:00.701 [2024-11-25 14:33:05.755800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:00.701 [2024-11-25 14:33:05.755828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:00.701 [2024-11-25 14:33:05.755836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:00.701 [2024-11-25 14:33:05.755843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:00.701 [2024-11-25 14:33:05.755863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:00.701 qpair failed and we were unable to recover it. 
00:35:01.030 [2024-11-25 14:33:05.765663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.030 [2024-11-25 14:33:05.765767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.030 [2024-11-25 14:33:05.765784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.030 [2024-11-25 14:33:05.765792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.030 [2024-11-25 14:33:05.765799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.030 [2024-11-25 14:33:05.765816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.030 qpair failed and we were unable to recover it. 00:35:01.030 [2024-11-25 14:33:05.775797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.030 [2024-11-25 14:33:05.775864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.030 [2024-11-25 14:33:05.775880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.030 [2024-11-25 14:33:05.775889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.030 [2024-11-25 14:33:05.775895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.030 [2024-11-25 14:33:05.775912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.030 qpair failed and we were unable to recover it. 00:35:01.030 [2024-11-25 14:33:05.785846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.785931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.785968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.785978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.785986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.786011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 
00:35:01.031 [2024-11-25 14:33:05.795866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.795933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.795954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.795962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.795968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.795987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.805900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.805995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.806013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.806020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.806027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.806046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.815965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.816035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.816053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.816060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.816066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.816084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 
00:35:01.031 [2024-11-25 14:33:05.825831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.825905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.825930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.825938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.825946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.825963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.835984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.836046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.836063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.836071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.836078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.836095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.845989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.846053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.846071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.846079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.846086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.846104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 
00:35:01.031 [2024-11-25 14:33:05.856065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.856144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.856167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.856176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.856182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.856200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.866085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.866195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.866214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.866221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.866233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.866250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.876081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.876148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.876171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.876178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.876185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.876202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 
00:35:01.031 [2024-11-25 14:33:05.886138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.886214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.886232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.886240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.886246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.886263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.896066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.896133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.896151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.896166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.896175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.896194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 00:35:01.031 [2024-11-25 14:33:05.906190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.031 [2024-11-25 14:33:05.906265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.031 [2024-11-25 14:33:05.906282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.031 [2024-11-25 14:33:05.906289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.031 [2024-11-25 14:33:05.906296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.031 [2024-11-25 14:33:05.906312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.031 qpair failed and we were unable to recover it. 
00:35:01.031 [2024-11-25 14:33:05.916217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.916281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.916300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.916308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.916314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.916331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:05.926275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.926387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.926403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.926411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.926418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.926435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:05.936285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.936393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.936410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.936418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.936424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.936441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 
00:35:01.032 [2024-11-25 14:33:05.946282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.946353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.946374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.946381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.946391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.946409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:05.956317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.956380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.956404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.956411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.956417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.956435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:05.966346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.966416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.966433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.966440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.966446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.966463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 
00:35:01.032 [2024-11-25 14:33:05.976427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.976542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.976559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.976566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.976573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.976590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:05.986292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.986363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.986383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.986390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.986397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.986415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:05.996413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:05.996463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:05.996480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:05.996488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:05.996494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:05.996515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 
00:35:01.032 [2024-11-25 14:33:06.006305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:06.006366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:06.006381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:06.006389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:06.006395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:06.006411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:06.016499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:06.016580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:06.016594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:06.016601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:06.016608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:06.016623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:06.026545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:06.026621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:06.026636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:06.026644] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:06.026650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:06.026665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 
00:35:01.032 [2024-11-25 14:33:06.036524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.032 [2024-11-25 14:33:06.036573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.032 [2024-11-25 14:33:06.036588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.032 [2024-11-25 14:33:06.036595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.032 [2024-11-25 14:33:06.036601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.032 [2024-11-25 14:33:06.036616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.032 qpair failed and we were unable to recover it. 00:35:01.032 [2024-11-25 14:33:06.046487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.033 [2024-11-25 14:33:06.046573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.033 [2024-11-25 14:33:06.046588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.033 [2024-11-25 14:33:06.046595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.033 [2024-11-25 14:33:06.046601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.033 [2024-11-25 14:33:06.046616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.033 qpair failed and we were unable to recover it. 00:35:01.033 [2024-11-25 14:33:06.056599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.033 [2024-11-25 14:33:06.056653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.033 [2024-11-25 14:33:06.056667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.033 [2024-11-25 14:33:06.056675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.033 [2024-11-25 14:33:06.056681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.033 [2024-11-25 14:33:06.056696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.033 qpair failed and we were unable to recover it. 
00:35:01.033 [2024-11-25 14:33:06.066580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.033 [2024-11-25 14:33:06.066642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.033 [2024-11-25 14:33:06.066659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.033 [2024-11-25 14:33:06.066666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.033 [2024-11-25 14:33:06.066673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.033 [2024-11-25 14:33:06.066692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.033 qpair failed and we were unable to recover it. 00:35:01.033 [2024-11-25 14:33:06.076604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.033 [2024-11-25 14:33:06.076654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.033 [2024-11-25 14:33:06.076670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.033 [2024-11-25 14:33:06.076677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.033 [2024-11-25 14:33:06.076684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.033 [2024-11-25 14:33:06.076698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.033 qpair failed and we were unable to recover it. 00:35:01.033 [2024-11-25 14:33:06.086603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.033 [2024-11-25 14:33:06.086653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.033 [2024-11-25 14:33:06.086676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.033 [2024-11-25 14:33:06.086684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.033 [2024-11-25 14:33:06.086691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.033 [2024-11-25 14:33:06.086708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.033 qpair failed and we were unable to recover it. 
00:35:01.033 [2024-11-25 14:33:06.096552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.033 [2024-11-25 14:33:06.096606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.033 [2024-11-25 14:33:06.096621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.033 [2024-11-25 14:33:06.096629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.033 [2024-11-25 14:33:06.096635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.033 [2024-11-25 14:33:06.096650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.033 qpair failed and we were unable to recover it. 00:35:01.033 [2024-11-25 14:33:06.106716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.033 [2024-11-25 14:33:06.106808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.033 [2024-11-25 14:33:06.106822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.033 [2024-11-25 14:33:06.106829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.033 [2024-11-25 14:33:06.106836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.033 [2024-11-25 14:33:06.106850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.033 qpair failed and we were unable to recover it. 00:35:01.334 [2024-11-25 14:33:06.116715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.116768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.116782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.116789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.116796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.116810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 
00:35:01.335 [2024-11-25 14:33:06.126668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.126718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.126732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.126739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.126745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.126763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.136746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.136817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.136831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.136838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.136845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.136859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.146784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.146843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.146856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.146863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.146870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.146883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 
00:35:01.335 [2024-11-25 14:33:06.156847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.156902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.156915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.156922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.156928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.156943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.166812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.166863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.166876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.166883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.166889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.166903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.176867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.176930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.176956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.176964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.176971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.176990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 
00:35:01.335 [2024-11-25 14:33:06.186849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.186903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.186929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.186938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.186944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.186964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.196799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.196851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.196866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.196873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.196880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.196895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.206895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.206956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.206981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.206990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.206997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.207017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 
00:35:01.335 [2024-11-25 14:33:06.216965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.217035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.217054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.217062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.217069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.217084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.226984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.227061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.227075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.227082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.227088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.227103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 00:35:01.335 [2024-11-25 14:33:06.237019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.335 [2024-11-25 14:33:06.237102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.335 [2024-11-25 14:33:06.237116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.335 [2024-11-25 14:33:06.237123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.335 [2024-11-25 14:33:06.237129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.335 [2024-11-25 14:33:06.237143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.335 qpair failed and we were unable to recover it. 
00:35:01.336 [2024-11-25 14:33:06.246889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.246937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.246950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.246957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.246963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.246977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.257125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.257214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.257227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.257234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.257241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.257258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.267076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.267120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.267133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.267140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.267147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.267163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 
00:35:01.336 [2024-11-25 14:33:06.277115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.277166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.277179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.277186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.277192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.277206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.287111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.287168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.287182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.287189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.287196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.287210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.297200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.297282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.297295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.297302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.297308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.297322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 
00:35:01.336 [2024-11-25 14:33:06.307201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.307249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.307263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.307270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.307276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.307290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.317232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.317277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.317291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.317298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.317304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.317318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.327216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.327260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.327274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.327281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.327287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.327301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 
00:35:01.336 [2024-11-25 14:33:06.337293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.337365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.337378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.337386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.337392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.337405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.347337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.347428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.347444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.347452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.347458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.347472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.357348] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.357399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.357413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.357420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.357426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.357439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 
00:35:01.336 [2024-11-25 14:33:06.367304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.336 [2024-11-25 14:33:06.367351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.336 [2024-11-25 14:33:06.367364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.336 [2024-11-25 14:33:06.367371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.336 [2024-11-25 14:33:06.367378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.336 [2024-11-25 14:33:06.367391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.336 qpair failed and we were unable to recover it. 00:35:01.336 [2024-11-25 14:33:06.377457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.337 [2024-11-25 14:33:06.377524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.337 [2024-11-25 14:33:06.377537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.337 [2024-11-25 14:33:06.377544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.337 [2024-11-25 14:33:06.377550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.337 [2024-11-25 14:33:06.377563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.337 qpair failed and we were unable to recover it. 00:35:01.337 [2024-11-25 14:33:06.387448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.337 [2024-11-25 14:33:06.387501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.337 [2024-11-25 14:33:06.387514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.337 [2024-11-25 14:33:06.387521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.337 [2024-11-25 14:33:06.387527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.337 [2024-11-25 14:33:06.387544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.337 qpair failed and we were unable to recover it. 
00:35:01.337 [2024-11-25 14:33:06.397473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.337 [2024-11-25 14:33:06.397526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.337 [2024-11-25 14:33:06.397541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.337 [2024-11-25 14:33:06.397549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.337 [2024-11-25 14:33:06.397555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.337 [2024-11-25 14:33:06.397572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.337 qpair failed and we were unable to recover it. 00:35:01.337 [2024-11-25 14:33:06.407443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.337 [2024-11-25 14:33:06.407488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.337 [2024-11-25 14:33:06.407502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.337 [2024-11-25 14:33:06.407509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.337 [2024-11-25 14:33:06.407515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.337 [2024-11-25 14:33:06.407529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.337 qpair failed and we were unable to recover it. 00:35:01.337 [2024-11-25 14:33:06.417487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.337 [2024-11-25 14:33:06.417542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.337 [2024-11-25 14:33:06.417555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.337 [2024-11-25 14:33:06.417562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.337 [2024-11-25 14:33:06.417568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.337 [2024-11-25 14:33:06.417582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.337 qpair failed and we were unable to recover it. 
00:35:01.600 [2024-11-25 14:33:06.427528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.427596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.427609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.427616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.427622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.427636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 00:35:01.600 [2024-11-25 14:33:06.437442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.437500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.437515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.437522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.437528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.437543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 00:35:01.600 [2024-11-25 14:33:06.447531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.447575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.447590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.447597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.447604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.447617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 
00:35:01.600 [2024-11-25 14:33:06.457587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.457639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.457652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.457660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.457666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.457680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 00:35:01.600 [2024-11-25 14:33:06.467646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.467698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.467711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.467718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.467725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.467738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 00:35:01.600 [2024-11-25 14:33:06.477660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.477708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.477725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.477733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.477739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.477753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 
00:35:01.600 [2024-11-25 14:33:06.487663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.487708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.487722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.487729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.487735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.487749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 00:35:01.600 [2024-11-25 14:33:06.497726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.497780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.497794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.497800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.497807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.497820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 00:35:01.600 [2024-11-25 14:33:06.507752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.507799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.507813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.600 [2024-11-25 14:33:06.507819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.600 [2024-11-25 14:33:06.507826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.600 [2024-11-25 14:33:06.507839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.600 qpair failed and we were unable to recover it. 
00:35:01.600 [2024-11-25 14:33:06.517780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.600 [2024-11-25 14:33:06.517885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.600 [2024-11-25 14:33:06.517897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.601 [2024-11-25 14:33:06.517905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.601 [2024-11-25 14:33:06.517911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.601 [2024-11-25 14:33:06.517928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.601 qpair failed and we were unable to recover it. 00:35:01.601 [2024-11-25 14:33:06.527745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.601 [2024-11-25 14:33:06.527791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.601 [2024-11-25 14:33:06.527804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.601 [2024-11-25 14:33:06.527811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.601 [2024-11-25 14:33:06.527817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.601 [2024-11-25 14:33:06.527831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.601 qpair failed and we were unable to recover it. 00:35:01.601 [2024-11-25 14:33:06.537716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.601 [2024-11-25 14:33:06.537772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.601 [2024-11-25 14:33:06.537786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.601 [2024-11-25 14:33:06.537793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.601 [2024-11-25 14:33:06.537799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.601 [2024-11-25 14:33:06.537812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.601 qpair failed and we were unable to recover it. 
00:35:01.601 [2024-11-25 14:33:06.547849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.601 [2024-11-25 14:33:06.547901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.601 [2024-11-25 14:33:06.547914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.601 [2024-11-25 14:33:06.547921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.601 [2024-11-25 14:33:06.547928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.601 [2024-11-25 14:33:06.547941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.601 qpair failed and we were unable to recover it. 00:35:01.601 [2024-11-25 14:33:06.557917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.601 [2024-11-25 14:33:06.558003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.601 [2024-11-25 14:33:06.558016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.601 [2024-11-25 14:33:06.558023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.601 [2024-11-25 14:33:06.558029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.601 [2024-11-25 14:33:06.558043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.601 qpair failed and we were unable to recover it. 00:35:01.601 [2024-11-25 14:33:06.567871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:01.601 [2024-11-25 14:33:06.567917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:01.601 [2024-11-25 14:33:06.567931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:01.601 [2024-11-25 14:33:06.567938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:01.601 [2024-11-25 14:33:06.567944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:01.601 [2024-11-25 14:33:06.567958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:01.601 qpair failed and we were unable to recover it. 
00:35:01.601 [2024-11-25 14:33:06.577926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.577987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.578001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.601 [2024-11-25 14:33:06.578007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.601 [2024-11-25 14:33:06.578014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.601 [2024-11-25 14:33:06.578027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.601 qpair failed and we were unable to recover it.
00:35:01.601 [2024-11-25 14:33:06.587976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.588022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.588035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.601 [2024-11-25 14:33:06.588042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.601 [2024-11-25 14:33:06.588049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.601 [2024-11-25 14:33:06.588063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.601 qpair failed and we were unable to recover it.
00:35:01.601 [2024-11-25 14:33:06.597996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.598092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.598105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.601 [2024-11-25 14:33:06.598112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.601 [2024-11-25 14:33:06.598118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.601 [2024-11-25 14:33:06.598132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.601 qpair failed and we were unable to recover it.
00:35:01.601 [2024-11-25 14:33:06.607987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.608030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.608046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.601 [2024-11-25 14:33:06.608053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.601 [2024-11-25 14:33:06.608060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.601 [2024-11-25 14:33:06.608073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.601 qpair failed and we were unable to recover it.
00:35:01.601 [2024-11-25 14:33:06.618046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.618129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.618142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.601 [2024-11-25 14:33:06.618150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.601 [2024-11-25 14:33:06.618156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.601 [2024-11-25 14:33:06.618174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.601 qpair failed and we were unable to recover it.
00:35:01.601 [2024-11-25 14:33:06.628077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.628161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.628175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.601 [2024-11-25 14:33:06.628182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.601 [2024-11-25 14:33:06.628188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.601 [2024-11-25 14:33:06.628202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.601 qpair failed and we were unable to recover it.
00:35:01.601 [2024-11-25 14:33:06.638106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.638164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.638178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.601 [2024-11-25 14:33:06.638185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.601 [2024-11-25 14:33:06.638191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.601 [2024-11-25 14:33:06.638204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.601 qpair failed and we were unable to recover it.
00:35:01.601 [2024-11-25 14:33:06.648056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.601 [2024-11-25 14:33:06.648102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.601 [2024-11-25 14:33:06.648115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.602 [2024-11-25 14:33:06.648122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.602 [2024-11-25 14:33:06.648128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.602 [2024-11-25 14:33:06.648145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.602 qpair failed and we were unable to recover it.
00:35:01.602 [2024-11-25 14:33:06.658180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.602 [2024-11-25 14:33:06.658285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.602 [2024-11-25 14:33:06.658300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.602 [2024-11-25 14:33:06.658307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.602 [2024-11-25 14:33:06.658313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.602 [2024-11-25 14:33:06.658327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.602 qpair failed and we were unable to recover it.
00:35:01.602 [2024-11-25 14:33:06.668204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.602 [2024-11-25 14:33:06.668292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.602 [2024-11-25 14:33:06.668306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.602 [2024-11-25 14:33:06.668313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.602 [2024-11-25 14:33:06.668319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.602 [2024-11-25 14:33:06.668332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.602 qpair failed and we were unable to recover it.
00:35:01.602 [2024-11-25 14:33:06.678218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.602 [2024-11-25 14:33:06.678268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.602 [2024-11-25 14:33:06.678281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.602 [2024-11-25 14:33:06.678288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.602 [2024-11-25 14:33:06.678295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.602 [2024-11-25 14:33:06.678308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.602 qpair failed and we were unable to recover it.
00:35:01.864 [2024-11-25 14:33:06.688256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.688336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.688350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.688357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.688363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.688376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.698256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.698304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.698317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.698324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.698330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.698344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.708294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.708344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.708358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.708364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.708371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.708384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.718309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.718367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.718381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.718388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.718394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.718408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.728331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.728395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.728409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.728416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.728422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.728435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.738401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.738488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.738505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.738512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.738519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.738534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.748398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.748446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.748460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.748467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.748473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.748487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.758432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.758483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.758495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.758503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.758509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.758523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.768481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.768555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.768569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.768576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.768583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.768598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.778486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.778542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.778556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.778563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.778569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.778586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.788500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.788563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.788576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.788583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.788589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.788603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.798513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.798567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.798580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.798588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.798594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.798607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.865 [2024-11-25 14:33:06.808509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.865 [2024-11-25 14:33:06.808555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.865 [2024-11-25 14:33:06.808569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.865 [2024-11-25 14:33:06.808576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.865 [2024-11-25 14:33:06.808582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.865 [2024-11-25 14:33:06.808596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.865 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.818614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.818715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.818728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.818736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.818742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.818755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.828642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.828691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.828704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.828711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.828718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.828731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.838633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.838683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.838696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.838703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.838710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.838724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.848638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.848683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.848696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.848703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.848709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.848723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.858711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.858761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.858774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.858781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.858787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.858801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.868760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.868848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.868864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.868871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.868878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.868891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.878757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.878803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.878817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.878824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.878830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.878844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.888755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.888804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.888817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.888824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.888831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.888844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.898849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.898938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.898952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.898959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.898966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.898980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.908872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.908973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.908998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.909007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.909014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.909038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.918868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.918917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.918933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.918941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.918947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.918962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.928874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.928973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.928986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.928993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.929000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.929014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.938953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.866 [2024-11-25 14:33:06.939008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.866 [2024-11-25 14:33:06.939022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.866 [2024-11-25 14:33:06.939029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.866 [2024-11-25 14:33:06.939035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.866 [2024-11-25 14:33:06.939049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.866 qpair failed and we were unable to recover it.
00:35:01.866 [2024-11-25 14:33:06.948823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:01.867 [2024-11-25 14:33:06.948872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:01.867 [2024-11-25 14:33:06.948887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:01.867 [2024-11-25 14:33:06.948894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:01.867 [2024-11-25 14:33:06.948901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:01.867 [2024-11-25 14:33:06.948915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:01.867 qpair failed and we were unable to recover it.
00:35:02.129 [2024-11-25 14:33:06.958983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.129 [2024-11-25 14:33:06.959029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.129 [2024-11-25 14:33:06.959043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.129 [2024-11-25 14:33:06.959051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.129 [2024-11-25 14:33:06.959058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.129 [2024-11-25 14:33:06.959072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.129 qpair failed and we were unable to recover it.
00:35:02.129 [2024-11-25 14:33:06.968961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.129 [2024-11-25 14:33:06.969017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.129 [2024-11-25 14:33:06.969030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.129 [2024-11-25 14:33:06.969038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.129 [2024-11-25 14:33:06.969044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.129 [2024-11-25 14:33:06.969058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.129 qpair failed and we were unable to recover it.
00:35:02.129 [2024-11-25 14:33:06.979058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.129 [2024-11-25 14:33:06.979110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.129 [2024-11-25 14:33:06.979123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.129 [2024-11-25 14:33:06.979130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.129 [2024-11-25 14:33:06.979137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.129 [2024-11-25 14:33:06.979150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.129 qpair failed and we were unable to recover it.
00:35:02.129 [2024-11-25 14:33:06.989079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.129 [2024-11-25 14:33:06.989123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.129 [2024-11-25 14:33:06.989137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.129 [2024-11-25 14:33:06.989144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.129 [2024-11-25 14:33:06.989151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.129 [2024-11-25 14:33:06.989169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.129 qpair failed and we were unable to recover it.
00:35:02.129 [2024-11-25 14:33:06.999062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.129 [2024-11-25 14:33:06.999113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.129 [2024-11-25 14:33:06.999129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.129 [2024-11-25 14:33:06.999137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.129 [2024-11-25 14:33:06.999143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.129 [2024-11-25 14:33:06.999160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.129 qpair failed and we were unable to recover it.
00:35:02.129 [2024-11-25 14:33:07.009087] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.129 [2024-11-25 14:33:07.009133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.129 [2024-11-25 14:33:07.009146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.129 [2024-11-25 14:33:07.009153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.129 [2024-11-25 14:33:07.009163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.009177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.019167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.019221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.019234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.019241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.019248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.019262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.029174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.029260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.029273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.029280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.029286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.029300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.039192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.039245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.039258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.039265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.039272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.039289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.049163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.049260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.049273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.049280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.049286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.049300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.059251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.059302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.059315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.059322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.059328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.059343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.069263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.069317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.069331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.069338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.069344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.069358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.079345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.079433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.079446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.079453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.079460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.079473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.089289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.089341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.089356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.089364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.089370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.089386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.099405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.099454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.099468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.099475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.099481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.099495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.109412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.109457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.109470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.109477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.130 [2024-11-25 14:33:07.109484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.130 [2024-11-25 14:33:07.109497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.130 qpair failed and we were unable to recover it.
00:35:02.130 [2024-11-25 14:33:07.119380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.130 [2024-11-25 14:33:07.119474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.130 [2024-11-25 14:33:07.119488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.130 [2024-11-25 14:33:07.119496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.131 [2024-11-25 14:33:07.119502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.131 [2024-11-25 14:33:07.119516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.131 qpair failed and we were unable to recover it.
00:35:02.131 [2024-11-25 14:33:07.129446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.131 [2024-11-25 14:33:07.129497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.131 [2024-11-25 14:33:07.129514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.131 [2024-11-25 14:33:07.129521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.131 [2024-11-25 14:33:07.129527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.131 [2024-11-25 14:33:07.129541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.131 qpair failed and we were unable to recover it.
00:35:02.131 [2024-11-25 14:33:07.139511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:35:02.131 [2024-11-25 14:33:07.139568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:35:02.131 [2024-11-25 14:33:07.139581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:35:02.131 [2024-11-25 14:33:07.139588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:35:02.131 [2024-11-25 14:33:07.139594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0
00:35:02.131 [2024-11-25 14:33:07.139608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:35:02.131 qpair failed and we were unable to recover it.
00:35:02.131 [2024-11-25 14:33:07.149504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.131 [2024-11-25 14:33:07.149553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.131 [2024-11-25 14:33:07.149567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.131 [2024-11-25 14:33:07.149574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.131 [2024-11-25 14:33:07.149580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.131 [2024-11-25 14:33:07.149594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.131 qpair failed and we were unable to recover it. 00:35:02.131 [2024-11-25 14:33:07.159526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.131 [2024-11-25 14:33:07.159608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.131 [2024-11-25 14:33:07.159620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.131 [2024-11-25 14:33:07.159627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.131 [2024-11-25 14:33:07.159634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.131 [2024-11-25 14:33:07.159647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.131 qpair failed and we were unable to recover it. 00:35:02.131 [2024-11-25 14:33:07.169547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.131 [2024-11-25 14:33:07.169593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.131 [2024-11-25 14:33:07.169606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.131 [2024-11-25 14:33:07.169613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.131 [2024-11-25 14:33:07.169619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.131 [2024-11-25 14:33:07.169636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.131 qpair failed and we were unable to recover it. 
00:35:02.131 [2024-11-25 14:33:07.179629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.131 [2024-11-25 14:33:07.179680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.131 [2024-11-25 14:33:07.179693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.131 [2024-11-25 14:33:07.179700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.131 [2024-11-25 14:33:07.179706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.131 [2024-11-25 14:33:07.179719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.131 qpair failed and we were unable to recover it. 00:35:02.131 [2024-11-25 14:33:07.189632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.131 [2024-11-25 14:33:07.189688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.131 [2024-11-25 14:33:07.189702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.131 [2024-11-25 14:33:07.189709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.131 [2024-11-25 14:33:07.189715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.131 [2024-11-25 14:33:07.189729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.131 qpair failed and we were unable to recover it. 00:35:02.131 [2024-11-25 14:33:07.199633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.131 [2024-11-25 14:33:07.199694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.131 [2024-11-25 14:33:07.199708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.131 [2024-11-25 14:33:07.199715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.131 [2024-11-25 14:33:07.199722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.131 [2024-11-25 14:33:07.199735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.131 qpair failed and we were unable to recover it. 
00:35:02.131 [2024-11-25 14:33:07.209657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.131 [2024-11-25 14:33:07.209749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.131 [2024-11-25 14:33:07.209763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.131 [2024-11-25 14:33:07.209770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.131 [2024-11-25 14:33:07.209776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.131 [2024-11-25 14:33:07.209790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.131 qpair failed and we were unable to recover it. 00:35:02.394 [2024-11-25 14:33:07.219599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.219657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.219672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.219680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.219686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.219700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 00:35:02.394 [2024-11-25 14:33:07.229745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.229807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.229821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.229828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.229834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.229848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 
00:35:02.394 [2024-11-25 14:33:07.239721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.239774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.239787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.239794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.239801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.239814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 00:35:02.394 [2024-11-25 14:33:07.249750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.249795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.249809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.249816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.249822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.249836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 00:35:02.394 [2024-11-25 14:33:07.259825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.259877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.259894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.259901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.259907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.259921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 
00:35:02.394 [2024-11-25 14:33:07.269842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.269929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.269955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.269964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.269972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.269991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 00:35:02.394 [2024-11-25 14:33:07.279829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.279903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.279929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.279939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.279946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.279966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 00:35:02.394 [2024-11-25 14:33:07.289834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.289935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.289961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.289970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.289977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.394 [2024-11-25 14:33:07.289996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.394 qpair failed and we were unable to recover it. 
00:35:02.394 [2024-11-25 14:33:07.299953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.394 [2024-11-25 14:33:07.300009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.394 [2024-11-25 14:33:07.300026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.394 [2024-11-25 14:33:07.300033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.394 [2024-11-25 14:33:07.300040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.300060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-11-25 14:33:07.309947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.309999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.310013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.310020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.310027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.310041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-11-25 14:33:07.319901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.319982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.319995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.320003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.320009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.320023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 
00:35:02.395 [2024-11-25 14:33:07.329976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.330020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.330034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.330041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.330048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.330062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-11-25 14:33:07.340070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.340118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.340131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.340139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.340145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.340163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-11-25 14:33:07.350048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.350093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.350107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.350114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.350120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.350135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 
00:35:02.395 [2024-11-25 14:33:07.359947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.360030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.360043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.360051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.360058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.360072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-11-25 14:33:07.369972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.370025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.370039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.370046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.370053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.370067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-11-25 14:33:07.380162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.380215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.380228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.380236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.380242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.380256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 
00:35:02.395 [2024-11-25 14:33:07.390123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.390174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.390192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.390200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.390206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.390221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.395 [2024-11-25 14:33:07.400154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.395 [2024-11-25 14:33:07.400207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.395 [2024-11-25 14:33:07.400220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.395 [2024-11-25 14:33:07.400228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.395 [2024-11-25 14:33:07.400234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.395 [2024-11-25 14:33:07.400249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.395 qpair failed and we were unable to recover it. 00:35:02.396 [2024-11-25 14:33:07.410170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.410248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.410261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.410269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.410276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.410291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 
00:35:02.396 [2024-11-25 14:33:07.420259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.420311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.420324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.420331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.420338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.420351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-11-25 14:33:07.430302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.430353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.430366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.430374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.430380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.430397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-11-25 14:33:07.440276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.440341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.440355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.440362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.440369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.440383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 
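Between failures the host simply starts over: each block is stamped roughly 10 ms after the previous one, and each ends with spdk_nvme_qpair_process_completions() reporting CQ transport error -6, i.e. -ENXIO ("No such device or address"), before the qpair is abandoned. A minimal sketch of that host-side pattern follows, assuming the default synchronous qpair connect inside spdk_nvme_ctrlr_alloc_io_qpair(); try_io_qpair_once is an illustrative wrapper, and error handling is reduced to the one code seen in this log.

#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Sketch of the polling pattern behind "CQ transport error -6 ... qpair
 * failed and we were unable to recover it": allocate an I/O qpair, poll
 * it, and treat a negative return as a dead qpair to be freed. */
static int
try_io_qpair_once(struct spdk_nvme_ctrlr *ctrlr)
{
    struct spdk_nvme_qpair *qpair;
    int32_t rc;

    /* NULL opts, size 0: use the controller's default I/O qpair options. */
    qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    if (qpair == NULL) {
        return -1; /* the Fabrics CONNECT itself never succeeded */
    }

    rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no completion cap */);
    if (rc < 0) {
        fprintf(stderr, "qpair failed: %d%s\n", rc,
                rc == -ENXIO ? " (No such device or address)" : "");
        spdk_nvme_ctrlr_free_io_qpair(qpair);
    }
    return rc;
}

Retrying a wrapper like this in a loop would produce the ~10 ms cadence visible in the timestamps above.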
00:35:02.396 [2024-11-25 14:33:07.450258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.450305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.450318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.450325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.450332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.450345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-11-25 14:33:07.460352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.460406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.460419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.460426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.460433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.460447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.396 [2024-11-25 14:33:07.470397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.470447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.470460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.470468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.470474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.470488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 
00:35:02.396 [2024-11-25 14:33:07.480370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.396 [2024-11-25 14:33:07.480424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.396 [2024-11-25 14:33:07.480437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.396 [2024-11-25 14:33:07.480445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.396 [2024-11-25 14:33:07.480451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.396 [2024-11-25 14:33:07.480465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.396 qpair failed and we were unable to recover it. 00:35:02.657 [2024-11-25 14:33:07.490410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.657 [2024-11-25 14:33:07.490458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.657 [2024-11-25 14:33:07.490472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.657 [2024-11-25 14:33:07.490479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.657 [2024-11-25 14:33:07.490486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.657 [2024-11-25 14:33:07.490500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.657 qpair failed and we were unable to recover it. 00:35:02.657 [2024-11-25 14:33:07.500486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.657 [2024-11-25 14:33:07.500586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.657 [2024-11-25 14:33:07.500600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.657 [2024-11-25 14:33:07.500608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.657 [2024-11-25 14:33:07.500614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.657 [2024-11-25 14:33:07.500628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.657 qpair failed and we were unable to recover it. 
00:35:02.657 [2024-11-25 14:33:07.510514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.657 [2024-11-25 14:33:07.510564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.657 [2024-11-25 14:33:07.510578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.657 [2024-11-25 14:33:07.510585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.657 [2024-11-25 14:33:07.510592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.657 [2024-11-25 14:33:07.510606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.657 qpair failed and we were unable to recover it. 00:35:02.657 [2024-11-25 14:33:07.520472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.657 [2024-11-25 14:33:07.520527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.657 [2024-11-25 14:33:07.520544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.657 [2024-11-25 14:33:07.520551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.520558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.520572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.530515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.530605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.530619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.530626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.530632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.530646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 
00:35:02.658 [2024-11-25 14:33:07.540457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.540506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.540519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.540527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.540533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.540547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.550603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.550707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.550721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.550729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.550735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.550749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.560456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.560501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.560516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.560524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.560531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.560549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 
00:35:02.658 [2024-11-25 14:33:07.570598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.570643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.570656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.570664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.570670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.570684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.580571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.580629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.580642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.580650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.580656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.580670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.590702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.590752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.590767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.590774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.590781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.590795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 
00:35:02.658 [2024-11-25 14:33:07.600688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.600731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.600744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.600752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.600758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.600772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.610691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.610740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.610754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.610762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.610768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.610782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.620806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.620853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.620866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.620873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.620879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.620893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 
00:35:02.658 [2024-11-25 14:33:07.630786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.630834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.630848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.630855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.630862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.630876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.640793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.640838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.640851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.640859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.640865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.658 [2024-11-25 14:33:07.640879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.658 qpair failed and we were unable to recover it. 00:35:02.658 [2024-11-25 14:33:07.650707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.658 [2024-11-25 14:33:07.650754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.658 [2024-11-25 14:33:07.650768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.658 [2024-11-25 14:33:07.650779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.658 [2024-11-25 14:33:07.650786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.659 [2024-11-25 14:33:07.650800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.659 qpair failed and we were unable to recover it. 
00:35:02.659 [2024-11-25 14:33:07.660895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.659 [2024-11-25 14:33:07.661000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.659 [2024-11-25 14:33:07.661014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.659 [2024-11-25 14:33:07.661022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.659 [2024-11-25 14:33:07.661029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.659 [2024-11-25 14:33:07.661042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.659 qpair failed and we were unable to recover it. 00:35:02.659 [2024-11-25 14:33:07.670925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.659 [2024-11-25 14:33:07.670979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.659 [2024-11-25 14:33:07.671004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.659 [2024-11-25 14:33:07.671014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.659 [2024-11-25 14:33:07.671021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.659 [2024-11-25 14:33:07.671041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.659 qpair failed and we were unable to recover it. 00:35:02.659 [2024-11-25 14:33:07.680908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.659 [2024-11-25 14:33:07.680959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.659 [2024-11-25 14:33:07.680975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.659 [2024-11-25 14:33:07.680983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.659 [2024-11-25 14:33:07.680990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae20c0 00:35:02.659 [2024-11-25 14:33:07.681005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:35:02.659 qpair failed and we were unable to recover it. 
00:35:02.659 [... 15 further CONNECT attempts fail identically (Unknown controller ID 0x1, sct 1/sc 130, CQ transport error -6 on tqpair=0xae20c0, qpair id 4) from 14:33:07.690 through 14:33:07.831, roughly 10 ms apart; only the timestamps differ from the records above ...]
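When triaging a run like this, the repetition itself is the useful signal. A quick way to quantify it from a saved copy of the console output (the filename here is illustrative):

    # Count the recovery failures and the distinct failing qpairs in a saved log:
    grep -c 'qpair failed and we were unable to recover it' console.log
    grep -o 'tqpair=0x[0-9a-f]*' console.log | sort | uniq -c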
00:35:02.920 [2024-11-25 14:33:07.841330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.920 [2024-11-25 14:33:07.841445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.920 [2024-11-25 14:33:07.841510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.920 [2024-11-25 14:33:07.841535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.920 [2024-11-25 14:33:07.841557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f70a0000b90 00:35:02.921 [2024-11-25 14:33:07.841612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:02.921 qpair failed and we were unable to recover it. 00:35:02.921 [2024-11-25 14:33:07.851358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:35:02.921 [2024-11-25 14:33:07.851425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:35:02.921 [2024-11-25 14:33:07.851455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:35:02.921 [2024-11-25 14:33:07.851471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:35:02.921 [2024-11-25 14:33:07.851486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f70a0000b90 00:35:02.921 [2024-11-25 14:33:07.851517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:35:02.921 qpair failed and we were unable to recover it. 00:35:02.921 [2024-11-25 14:33:07.851665] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:35:02.921 A controller has encountered a failure and is being reset. 00:35:02.921 [2024-11-25 14:33:07.851781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7e00 (9): Bad file descriptor 00:35:02.921 Controller properly reset. 00:35:03.181 Initializing NVMe Controllers 00:35:03.181 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:03.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:03.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:35:03.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:35:03.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:35:03.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:35:03.181 Initialization complete. Launching workers. 
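The keep-alive failure above is what finally triggers the reset path: the host gives up on qpair-level recovery, resets the controller, and the "Attached to NVMe over Fabrics controller" lines confirm the re-connect succeeded before the worker threads start below. To inspect the target side of such a reset while it happens, one could query the target's RPC socket; a sketch assuming the SPDK target app in this workspace was started with the default rpc.py socket:

    # Sketch: target-side view of controllers and qpairs over the default RPC socket.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1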
00:35:03.181 Starting thread on core 1 00:35:03.181 Starting thread on core 2 00:35:03.181 Starting thread on core 3 00:35:03.181 Starting thread on core 0 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:35:03.181 00:35:03.181 real 0m11.567s 00:35:03.181 user 0m21.788s 00:35:03.181 sys 0m4.195s 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:35:03.181 ************************************ 00:35:03.181 END TEST nvmf_target_disconnect_tc2 00:35:03.181 ************************************ 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:03.181 rmmod nvme_tcp 00:35:03.181 rmmod nvme_fabrics 00:35:03.181 rmmod nvme_keyring 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3625342 ']' 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3625342 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3625342 ']' 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3625342 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3625342 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3625342' 00:35:03.181 killing process with pid 3625342 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3625342 00:35:03.181 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3625342 00:35:03.442 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:03.442 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:03.442 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:03.443 14:33:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.377 14:33:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.377 00:35:05.377 real 0m22.038s 00:35:05.377 user 0m50.034s 00:35:05.377 sys 0m10.529s 00:35:05.377 14:33:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.377 14:33:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:35:05.377 ************************************ 00:35:05.377 END TEST nvmf_target_disconnect 00:35:05.377 ************************************ 00:35:05.377 14:33:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:05.377 00:35:05.377 real 6m32.178s 00:35:05.377 user 11m29.571s 00:35:05.377 sys 2m15.994s 00:35:05.377 14:33:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.377 14:33:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.377 ************************************ 00:35:05.377 END TEST nvmf_host 00:35:05.377 ************************************ 00:35:05.638 14:33:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:35:05.638 14:33:10 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:35:05.638 14:33:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:05.638 14:33:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:05.638 14:33:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.638 14:33:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:05.638 ************************************ 00:35:05.638 START TEST nvmf_target_core_interrupt_mode 00:35:05.638 ************************************ 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:35:05.638 * Looking for test storage... 00:35:05.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:35:05.638 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.899 --rc genhtml_branch_coverage=1 00:35:05.899 --rc genhtml_function_coverage=1 00:35:05.899 --rc genhtml_legend=1 00:35:05.899 --rc geninfo_all_blocks=1 00:35:05.899 --rc geninfo_unexecuted_blocks=1 00:35:05.899 00:35:05.899 ' 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.899 --rc genhtml_branch_coverage=1 00:35:05.899 --rc genhtml_function_coverage=1 00:35:05.899 --rc genhtml_legend=1 00:35:05.899 --rc geninfo_all_blocks=1 00:35:05.899 --rc geninfo_unexecuted_blocks=1 00:35:05.899 00:35:05.899 ' 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.899 --rc genhtml_branch_coverage=1 00:35:05.899 --rc genhtml_function_coverage=1 00:35:05.899 --rc genhtml_legend=1 00:35:05.899 --rc geninfo_all_blocks=1 00:35:05.899 --rc geninfo_unexecuted_blocks=1 00:35:05.899 00:35:05.899 ' 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:05.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.899 --rc genhtml_branch_coverage=1 00:35:05.899 --rc genhtml_function_coverage=1 00:35:05.899 --rc genhtml_legend=1 00:35:05.899 --rc geninfo_all_blocks=1 00:35:05.899 --rc geninfo_unexecuted_blocks=1 00:35:05.899 00:35:05.899 ' 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.899 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:05.900 ************************************ 00:35:05.900 START TEST nvmf_abort 00:35:05.900 ************************************ 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:35:05.900 * Looking for test storage... 00:35:05.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:35:05.900 14:33:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.163 --rc genhtml_branch_coverage=1 00:35:06.163 --rc genhtml_function_coverage=1 00:35:06.163 --rc genhtml_legend=1 00:35:06.163 --rc geninfo_all_blocks=1 00:35:06.163 --rc geninfo_unexecuted_blocks=1 00:35:06.163 00:35:06.163 ' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.163 --rc genhtml_branch_coverage=1 00:35:06.163 --rc genhtml_function_coverage=1 00:35:06.163 --rc genhtml_legend=1 00:35:06.163 --rc geninfo_all_blocks=1 00:35:06.163 --rc geninfo_unexecuted_blocks=1 00:35:06.163 00:35:06.163 ' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.163 --rc genhtml_branch_coverage=1 00:35:06.163 --rc genhtml_function_coverage=1 00:35:06.163 --rc genhtml_legend=1 00:35:06.163 --rc geninfo_all_blocks=1 00:35:06.163 --rc geninfo_unexecuted_blocks=1 00:35:06.163 00:35:06.163 ' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:06.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.163 --rc genhtml_branch_coverage=1 00:35:06.163 --rc genhtml_function_coverage=1 00:35:06.163 --rc genhtml_legend=1 00:35:06.163 --rc geninfo_all_blocks=1 00:35:06.163 --rc geninfo_unexecuted_blocks=1 00:35:06.163 00:35:06.163 ' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.163 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.164 14:33:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.164 14:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.321 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:14.321 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:35:14.321 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:14.322 14:33:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:14.322 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
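The device scan above has matched two Intel E810 functions (0x8086:0x159b) and is classifying each one; the steps that follow resolve each PCI address to its kernel net device through sysfs. Condensed, the logic being traced amounts to this sketch (a paraphrase of the nvmf/common.sh trace, not a verbatim excerpt):

    # Condensed sketch of the traced discovery: map each matched PCI
    # function to its kernel net device via sysfs.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [[ -e $dev ]] || continue
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done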
00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:14.322 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:14.322 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:14.322 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:14.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:14.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:35:14.322 00:35:14.322 --- 10.0.0.2 ping statistics --- 00:35:14.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.322 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:35:14.322 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:14.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:14.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:35:14.322 00:35:14.322 --- 10.0.0.1 ping statistics --- 00:35:14.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:14.322 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3631498 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3631498 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3631498 ']' 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 [2024-11-25 14:33:18.613359] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:14.323 [2024-11-25 14:33:18.614864] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:35:14.323 [2024-11-25 14:33:18.614939] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:14.323 [2024-11-25 14:33:18.693614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:14.323 [2024-11-25 14:33:18.739783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:14.323 [2024-11-25 14:33:18.739828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:14.323 [2024-11-25 14:33:18.739836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:14.323 [2024-11-25 14:33:18.739843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:14.323 [2024-11-25 14:33:18.739851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:14.323 [2024-11-25 14:33:18.743184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:14.323 [2024-11-25 14:33:18.743348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:14.323 [2024-11-25 14:33:18.743405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.323 [2024-11-25 14:33:18.814447] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:14.323 [2024-11-25 14:33:18.815328] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:14.323 [2024-11-25 14:33:18.815754] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
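The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." echo above comes from waitforlisten, which blocks until the freshly launched nvmf_tgt answers on its RPC socket (the trace shows its retry budget: local max_retries=100). A minimal sketch of the same idea, assuming the rpc.py path from this workspace and the standard rpc_get_methods RPC:

    # Poll the app's RPC socket until it responds, or give up after 100 tries.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for i in {1..100}; do
        "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done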
00:35:14.323 [2024-11-25 14:33:18.815969] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 [2024-11-25 14:33:18.904409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 Malloc0 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 Delay0 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.323 14:33:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 [2024-11-25 14:33:19.008358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.323 14:33:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:14.323 [2024-11-25 14:33:19.152827] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:16.272 Initializing NVMe Controllers 00:35:16.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:16.272 controller IO queue size 128 less than required 00:35:16.272 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:16.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:16.272 Initialization complete. Launching workers. 
00:35:16.272 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28408 00:35:16.272 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28465, failed to submit 66 00:35:16.272 success 28408, unsuccessful 57, failed 0 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:16.272 rmmod nvme_tcp 00:35:16.272 rmmod nvme_fabrics 00:35:16.272 rmmod nvme_keyring 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3631498 ']' 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3631498 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3631498 ']' 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3631498 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.272 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3631498 00:35:16.533 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:16.533 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:16.533 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3631498' 00:35:16.533 killing process with pid 3631498 
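The abort counters above are internally consistent: the initiator issued 28,531 I/Os in total (123 completed normally + 28,408 aborted), and 28,531 abort attempts were made against them, of which 66 failed to submit and 28,465 were submitted; of the submitted aborts, 28,408 succeeded and 57 were unsuccessful (28,408 + 57 = 28,465, and 28,465 + 66 = 28,531, matching the I/O total). The trailing "failed 0" reports that no submitted abort command itself completed with an error status.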
00:35:16.533 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3631498 00:35:16.533 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3631498 00:35:16.533 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:16.533 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:16.534 14:33:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:19.080 00:35:19.080 real 0m12.805s 00:35:19.080 user 0m10.820s 00:35:19.080 sys 0m6.834s 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:19.080 ************************************ 00:35:19.080 END TEST nvmf_abort 00:35:19.080 ************************************ 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:19.080 ************************************ 00:35:19.080 START TEST nvmf_ns_hotplug_stress 00:35:19.080 ************************************ 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:19.080 * Looking for test storage... 
00:35:19.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:19.080 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:19.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.081 --rc genhtml_branch_coverage=1 00:35:19.081 --rc genhtml_function_coverage=1 00:35:19.081 --rc genhtml_legend=1 00:35:19.081 --rc geninfo_all_blocks=1 00:35:19.081 --rc geninfo_unexecuted_blocks=1 00:35:19.081 00:35:19.081 ' 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:19.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.081 --rc genhtml_branch_coverage=1 00:35:19.081 --rc genhtml_function_coverage=1 00:35:19.081 --rc genhtml_legend=1 00:35:19.081 --rc geninfo_all_blocks=1 00:35:19.081 --rc geninfo_unexecuted_blocks=1 00:35:19.081 00:35:19.081 ' 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:19.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.081 --rc genhtml_branch_coverage=1 00:35:19.081 --rc genhtml_function_coverage=1 00:35:19.081 --rc genhtml_legend=1 00:35:19.081 --rc geninfo_all_blocks=1 00:35:19.081 --rc geninfo_unexecuted_blocks=1 00:35:19.081 00:35:19.081 ' 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:19.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.081 --rc genhtml_branch_coverage=1 00:35:19.081 --rc genhtml_function_coverage=1 
00:35:19.081 --rc genhtml_legend=1 00:35:19.081 --rc geninfo_all_blocks=1 00:35:19.081 --rc geninfo_unexecuted_blocks=1 00:35:19.081 00:35:19.081 ' 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
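The cmp_versions trace above (scripts/common.sh@333-368) splits each version string on '.', '-' and ':' and compares the fields numerically; here lcov 1.15 vs 2 is decided at the first field (1 < 2), so `lt 1.15 2` returns 0 and the branch/function-coverage LCOV options get exported. A toy equivalent, illustrative rather than the exact script:

    # Minimal field-wise version compare in the spirit of cmp_versions.
    lt() {
        local IFS=.-: i a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first smaller field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo older   # prints "older": decided at 1 < 2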
00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.081 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:35:19.082 14:33:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:27.222 14:33:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:27.222 14:33:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:27.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:27.222 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.222 
14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:27.222 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.222 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:27.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:27.223 14:33:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:27.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:27.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:35:27.223 00:35:27.223 --- 10.0.0.2 ping statistics --- 00:35:27.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.223 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:27.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:27.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:35:27.223 00:35:27.223 --- 10.0.0.1 ping statistics --- 00:35:27.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:27.223 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3636182 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3636182 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3636182 ']' 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.223 14:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:27.223 [2024-11-25 14:33:31.550970] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:27.223 [2024-11-25 14:33:31.552108] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:35:27.223 [2024-11-25 14:33:31.552180] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:27.223 [2024-11-25 14:33:31.653777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:27.223 [2024-11-25 14:33:31.704424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:27.223 [2024-11-25 14:33:31.704480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:27.223 [2024-11-25 14:33:31.704492] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:27.223 [2024-11-25 14:33:31.704502] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:27.223 [2024-11-25 14:33:31.704510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:27.223 [2024-11-25 14:33:31.706624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:27.223 [2024-11-25 14:33:31.706754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.223 [2024-11-25 14:33:31.706755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.223 [2024-11-25 14:33:31.784433] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:27.223 [2024-11-25 14:33:31.785608] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:27.223 [2024-11-25 14:33:31.786166] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:27.223 [2024-11-25 14:33:31.786291] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
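
The target itself is started inside that namespace, and the startup notices above confirm what the flags asked for: -m 0xE pins reactors to cores 1-3, -e 0xFFFF enables all tracepoint groups, and --interrupt-mode puts the reactors and the nvmf poll-group threads into interrupt mode instead of busy polling. The harness then blocks until the RPC socket answers (its waitforlisten helper). A rough approximation of that launch-and-wait step, polling with the cheap spdk_get_version RPC in place of the real helper:

    # Launch nvmf_tgt in the namespace and wait for /var/tmp/spdk.sock to answer.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.2
    done
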
00:35:27.484 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.485 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:35:27.485 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:27.485 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:27.485 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:27.485 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:27.485 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:35:27.485 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:27.745 [2024-11-25 14:33:32.595784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.746 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:27.746 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:28.006 [2024-11-25 14:33:32.960570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:28.006 14:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:28.268 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:35:28.268 Malloc0 00:35:28.528 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:28.528 Delay0 00:35:28.528 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:28.788 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:35:29.049 NULL1 00:35:29.049 14:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
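
With the target up, the test provisions everything over JSON-RPC: a TCP transport, subsystem cnode1 with a subsystem listener and a discovery listener, and two namespaces -- Delay0, a malloc disk wrapped in a delay bdev that injects roughly one second of artificial latency on reads and writes (the four values are microseconds), and NULL1, a 1000 MiB null bdev that will be resized under load. Condensed from the RPC calls above, with $rpc standing in for the full scripts/rpc.py path:

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10              # any host, up to 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MiB RAM disk, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000 # read/write latencies in us
    $rpc bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512 B blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # becomes NSID 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1    # becomes NSID 2
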
00:35:29.049 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3636549 00:35:29.049 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:29.049 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:35:29.049 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.309 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:29.570 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:35:29.570 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:35:29.831 true 00:35:29.831 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:29.831 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.831 14:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:30.091 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:35:30.091 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:30.354 true 00:35:30.354 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:30.354 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:30.615 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:30.875 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:35:30.875 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:35:30.875 true 00:35:30.875 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:30.875 14:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:31.136 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:31.397 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:35:31.397 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:35:31.657 true 00:35:31.658 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:31.658 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:31.658 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:31.918 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:35:31.918 14:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:35:32.179 true 00:35:32.179 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:32.179 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:32.179 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:32.441 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:35:32.441 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:35:32.702 true 00:35:32.702 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:32.702 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:32.963 14:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:35:32.963 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:35:32.963 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:35:33.224 true 00:35:33.224 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:33.224 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:33.485 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:33.485 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:35:33.485 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:35:33.745 true 00:35:33.745 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:33.746 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:34.007 14:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:34.266 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:35:34.266 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:35:34.266 true 00:35:34.266 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:34.266 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:34.526 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:34.785 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:35:34.785 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:35:34.785 true 00:35:34.785 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3636549 00:35:34.785 14:33:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:35.045 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:35.305 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:35:35.305 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:35:35.305 true 00:35:35.577 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:35.577 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:35.577 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:35.837 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:35:35.837 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:35:36.098 true 00:35:36.098 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:36.098 14:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:36.098 14:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:36.359 14:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:35:36.359 14:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:35:36.619 true 00:35:36.619 14:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:36.619 14:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:36.881 14:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:36.881 14:33:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:35:36.881 14:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:35:37.141 true 00:35:37.141 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:37.141 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:37.401 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:37.401 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:35:37.401 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:35:37.661 true 00:35:37.661 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:37.661 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:37.921 14:33:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:38.224 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:35:38.224 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:35:38.224 true 00:35:38.224 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:38.224 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:38.484 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:38.484 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:35:38.484 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:35:38.745 true 00:35:38.745 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:38.745 14:33:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:39.005 14:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:39.265 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:35:39.265 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:35:39.265 true 00:35:39.265 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:39.265 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:39.525 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:39.785 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:35:39.785 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:35:39.785 true 00:35:40.045 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:40.045 14:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:40.045 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:40.305 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:35:40.305 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:35:40.565 true 00:35:40.565 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:40.565 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:40.565 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:40.823 14:33:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:35:40.823 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:35:41.083 true 00:35:41.083 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:41.083 14:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:41.342 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:41.342 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:35:41.342 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:35:41.603 true 00:35:41.603 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:41.603 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:41.866 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:41.866 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:35:41.866 14:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:35:42.127 true 00:35:42.127 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:42.127 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:42.388 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:42.649 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:35:42.649 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:35:42.649 true 00:35:42.649 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:42.649 14:33:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:42.910 14:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:43.171 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:35:43.171 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:35:43.171 true 00:35:43.171 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:43.172 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:43.433 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:43.695 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:35:43.695 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:35:43.695 true 00:35:43.695 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:43.695 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:43.955 14:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:44.216 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:35:44.216 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:35:44.216 true 00:35:44.216 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:44.216 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:44.476 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:44.736 14:33:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:35:44.736 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:35:44.995 true 00:35:44.995 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:44.995 14:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:44.995 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:45.254 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:35:45.254 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:35:45.515 true 00:35:45.515 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:45.515 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:45.775 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:45.775 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:35:45.775 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:35:46.036 true 00:35:46.036 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:46.036 14:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:46.297 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:46.297 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:35:46.297 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:35:46.558 true 00:35:46.558 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:46.558 14:33:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:46.818 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:47.080 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:35:47.080 14:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:35:47.080 true 00:35:47.080 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:47.080 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:47.341 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:47.602 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:35:47.602 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:35:47.602 true 00:35:47.602 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:47.602 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:47.863 14:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:48.123 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:35:48.123 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:35:48.123 true 00:35:48.384 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:48.384 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:48.384 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:48.646 14:33:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:35:48.646 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:35:48.907 true 00:35:48.907 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:48.907 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:48.907 14:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:49.167 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:35:49.167 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:35:49.428 true 00:35:49.428 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:49.428 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:49.688 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:49.688 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:35:49.688 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:35:49.948 true 00:35:49.948 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:49.948 14:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:50.209 14:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:50.209 14:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:35:50.209 14:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:35:50.469 true 00:35:50.469 14:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:50.469 14:33:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:50.729 14:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:50.990 14:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:35:50.990 14:33:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:35:50.990 true 00:35:50.990 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:50.990 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:51.250 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:51.511 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:35:51.511 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:35:51.511 true 00:35:51.511 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:51.511 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:51.772 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:52.032 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:35:52.032 14:33:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:35:52.032 true 00:35:52.032 14:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:52.032 14:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:52.293 14:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:52.553 14:33:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:35:52.553 14:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:35:52.553 true 00:35:52.814 14:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:52.814 14:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:52.814 14:33:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:53.076 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:35:53.076 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:35:53.342 true 00:35:53.342 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:53.342 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:53.342 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:53.723 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:35:53.723 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:35:54.041 true 00:35:54.041 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:54.041 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:54.041 14:33:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:54.327 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:35:54.327 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:35:54.327 true 00:35:54.327 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:54.327 14:33:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:54.588 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:54.848 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:35:54.848 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:35:54.848 true 00:35:54.849 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:54.849 14:33:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:55.109 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:55.370 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:35:55.370 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:35:55.370 true 00:35:55.370 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:55.370 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:55.630 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:55.891 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:35:55.891 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:35:55.891 true 00:35:56.151 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:56.151 14:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:56.151 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:56.411 14:34:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:35:56.411 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:35:56.672 true 00:35:56.672 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:56.672 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:56.672 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:56.934 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:35:56.934 14:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:35:57.195 true 00:35:57.195 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:57.195 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:57.456 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.456 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:35:57.456 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:35:57.717 true 00:35:57.717 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:57.717 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:57.979 14:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:57.979 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:35:57.979 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:35:58.240 true 00:35:58.240 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:58.240 14:34:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:58.502 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:58.502 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:35:58.502 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:35:58.763 true 00:35:58.763 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:58.763 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.025 14:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:59.285 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:35:59.285 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:35:59.285 true 00:35:59.285 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:59.285 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:59.547 Initializing NVMe Controllers 00:35:59.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:59.547 Controller IO queue size 128, less than required. 00:35:59.547 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:59.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:35:59.547 Initialization complete. Launching workers. 
00:35:59.547 ========================================================
00:35:59.547                                                                            Latency(us)
00:35:59.547 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:35:59.547 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30351.86      14.82    4217.25    1101.86   11688.07
00:35:59.547 ========================================================
00:35:59.547 Total                                                                    : 30351.86      14.82    4217.25    1101.86   11688.07
00:35:59.547
00:35:59.547 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:59.807 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:35:59.807 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:35:59.807 true 00:35:59.807 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3636549 00:35:59.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3636549) - No such process 00:35:59.808 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3636549 00:35:59.808 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:00.067 14:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:00.067 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:36:00.067 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:36:00.067 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:36:00.067 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:00.067 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:36:00.328 null0 00:36:00.328 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:00.328 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:00.328 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:36:00.588 null1 00:36:00.588 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:00.588 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
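
The trace above is the tail of the script's single-namespace phase: while the I/O generator (pid 3636549 here) is still alive, ns_hotplug_stress.sh lines 44-50 hot-remove namespace 1, re-add it backed by the Delay0 bdev, and grow the NULL1 bdev by one block per pass (null_size 1046 -> 1055 above); once kill -0 reports "No such process" the loop ends and the generator is reaped. A minimal sketch of that loop, reconstructed from the sh -x trace (variable names are guesses; $rpc_py and $subnqn stand for the full rpc.py path and nqn.2016-06.io.spdk:cnode1 shown in the log):

    while kill -0 "$perf_pid"; do                      # line 44: keep going while the I/O job runs
        $rpc_py nvmf_subsystem_remove_ns "$subnqn" 1   # line 45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns "$subnqn" Delay0 # line 46: re-add it, backed by Delay0
        null_size=$((null_size + 1))                   # line 49: next size (1046, 1047, ... in the trace)
        $rpc_py bdev_null_resize NULL1 "$null_size"    # line 50: resize NULL1 under I/O; prints "true"
    done
    wait "$perf_pid"                                   # line 53: reap the finished I/O job
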
00:36:00.588 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:36:00.588 null2 00:36:00.849 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:00.849 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:00.849 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:36:00.849 null3 00:36:00.849 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:00.849 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:00.849 14:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:36:01.110 null4 00:36:01.110 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:01.110 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:01.110 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:36:01.110 null5 00:36:01.371 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:01.371 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:01.371 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:36:01.371 null6 00:36:01.371 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:01.371 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:01.371 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:36:01.632 null7 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
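
At lines 58-60 the script then provisions the bdevs for the concurrent phase: nthreads=8, and one null bdev per worker (null0 .. null7), each 100 MiB with a 4096-byte block size (rpc.py echoes the new bdev's name, the bare null0/null1/... lines above). Roughly, keeping the names from the trace:

    nthreads=8                                         # line 58
    pids=()                                            # line 58: will collect worker pids
    for (( i = 0; i < nthreads; ++i )); do             # line 59
        $rpc_py bdev_null_create "null$i" 100 4096     # line 60: 100 MiB null bdev, 4096-byte blocks
    done
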
00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
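
Each pids+=($!) above records one backgrounded add_remove worker. Read together, the interleaved @14-@18 and @62-@64 trace lines correspond to a worker that adds and removes its own namespace ten times, and a dispatch loop that starts one worker per null bdev; the wait on line 66 (visible below with the real pids) joins them all. A sketch under the same assumptions as above:

    add_remove() {
        local nsid=$1 bdev=$2                          # line 14
        for (( i = 0; i < 10; ++i )); do               # line 16: ten add/remove rounds per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$subnqn" "$bdev"  # line 17
            $rpc_py nvmf_subsystem_remove_ns "$subnqn" "$nsid"          # line 18
        done
    }

    for (( i = 0; i < nthreads; ++i )); do             # line 62
        add_remove $((i + 1)) "null$i" &               # line 63: nsid i+1 on bdev null$i
        pids+=($!)                                     # line 64: remember the worker pid
    done
    wait "${pids[@]}"                                  # line 66: shown as "wait 3642983 3642985 ..." below
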
00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3642983 3642985 3642989 3642991 3642993 3642996 3642999 3643002 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.632 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:01.894 14:34:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:01.894 14:34:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.156 14:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:02.156 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.418 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:02.677 14:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:02.677 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:02.677 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:02.677 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:02.677 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:02.678 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:02.938 14:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:36:02.938 14:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:03.198 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.198 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:03.198 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.198 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:03.199 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.460 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:03.721 14:34:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.721 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:03.982 14:34:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:03.982 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:03.982 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:04.242 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.503 14:34:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.503 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:04.764 
14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:04.764 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:04.765 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:04.765 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:04.765 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:36:05.026 14:34:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:05.026 14:34:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:05.026 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:05.026 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.026 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.026 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:36:05.026 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:36:05.286 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.286 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.286 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:36:05.286 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.286 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:36:05.287 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:05.547 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:36:05.548 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:05.548 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:36:05.548 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:05.548 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:05.548 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
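The iterations traced above are the main loop of target/ns_hotplug_stress.sh (sh@16-18) racing namespace attach and detach against the running target, followed by the standard nvmftestfini teardown. A minimal sketch of that pattern, assuming rpc.py talks to the default socket; the random namespace pairing is an illustration, and only the two RPC verbs and the ten-pass counter are taken from the trace:

# Condensed shape of the hotplug stress loop seen in the xtrace output.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
i=0
while (( i < 10 )); do
    n=$(( RANDOM % 8 + 1 ))           # nsid 1..8 maps onto bdevs null0..null7
    # Failures are tolerated: a remove can legitimately race an add that never landed.
    "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$(( n - 1 ))" || true
    "$rpc" nvmf_subsystem_remove_ns "$nqn" "$(( RANDOM % 8 + 1 ))" || true
    (( ++i ))
done
# Teardown as in nvmftestfini: drop the EXIT trap, then unload the
# initiator-side kernel modules (the rmmod lines above are modprobe -v output).
trap - SIGINT SIGTERM EXIT
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics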
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3636182 ']'
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3636182
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3636182 ']'
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3636182
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3636182
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:05.808 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3636182'
00:36:05.808 killing process with pid 3636182
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3636182
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3636182
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:05.809 14:34:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:08.354 14:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:08.354
00:36:08.354 real 0m49.249s
00:36:08.354 user 3m4.887s
00:36:08.354 sys 0m22.151s
00:36:08.354 14:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:08.354 14:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:36:08.354 ************************************
00:36:08.354 END TEST nvmf_ns_hotplug_stress
00:36:08.354 ************************************
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
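run_test in autotest_common.sh produced both the END banner and the real/user/sys block above, and prints a matching START banner just below: it wraps each test script in banners and a time measurement and passes the exit status through. A rough sketch reconstructed from this output only; the real helper also validates its argument count (the '[' 4 -le 1 ']' check visible below) and toggles xtrace, which is elided here:

# run_test <name> <cmd...> -- banner, time, banner; status passes through.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode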
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:08.354 ************************************
00:36:08.354 START TEST nvmf_delete_subsystem
00:36:08.354 ************************************
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:36:08.354 * Looking for test storage...
00:36:08.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
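The lt 1.15 2 call above enters cmp_versions in scripts/common.sh: both version strings are split on IFS=.-: into arrays and walked field by field, the walk continuing in the trace just below. A condensed, self-contained version of the same idea (not the verbatim helper, which additionally validates every field through its decimal function):

# lt A B -- succeed when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # missing fields count as 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov predates 2.x"   # 1 < 2 on the first field, so this prints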
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:36:08.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:08.354 --rc genhtml_branch_coverage=1
00:36:08.354 --rc genhtml_function_coverage=1
00:36:08.354 --rc genhtml_legend=1
00:36:08.354 --rc geninfo_all_blocks=1
00:36:08.354 --rc geninfo_unexecuted_blocks=1
00:36:08.354
00:36:08.354 '
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:36:08.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:08.354 --rc genhtml_branch_coverage=1
00:36:08.354 --rc genhtml_function_coverage=1
00:36:08.354 --rc genhtml_legend=1
00:36:08.354 --rc geninfo_all_blocks=1
00:36:08.354 --rc geninfo_unexecuted_blocks=1
00:36:08.354
00:36:08.354 '
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:36:08.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:08.354 --rc genhtml_branch_coverage=1
00:36:08.354 --rc genhtml_function_coverage=1
00:36:08.354 --rc genhtml_legend=1
00:36:08.354 --rc geninfo_all_blocks=1
00:36:08.354 --rc geninfo_unexecuted_blocks=1
00:36:08.354
00:36:08.354 '
00:36:08.354 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:36:08.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:08.354 --rc genhtml_branch_coverage=1
00:36:08.354 --rc genhtml_function_coverage=1
00:36:08.354 --rc 
genhtml_legend=1 00:36:08.354 --rc geninfo_all_blocks=1 00:36:08.354 --rc geninfo_unexecuted_blocks=1 00:36:08.354 00:36:08.354 ' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.355 14:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:36:08.355 14:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:16.498 14:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:16.498 14:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:16.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:16.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:16.498 14:34:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:36:16.498 Found net devices under 0000:4b:00.0: cvl_0_0
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:36:16.498 Found net devices under 0000:4b:00.1: cvl_0_1
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:16.498 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
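The two "Found net devices under ..." lines above are the tail of gather_supported_nvmf_pci_devs: for every PCI function whose vendor:device pair is on the supported-NIC lists (the 0x8086 - 0x159b E810 matches found earlier), common.sh resolves the bound kernel interface by globbing the device's net/ directory in sysfs. The core of that lookup, condensed to the one device ID seen in this run:

# Resolve net interface names for supported NIC PCI functions via sysfs.
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    # Only 0x8086:0x159b (the Intel E810 function matched above) is checked
    # here; the real helper walks several e810/x722/mlx device-ID lists.
    [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
    pci_net_devs=("$pci/net/"*)                 # e.g. .../0000:4b:00.0/net/cvl_0_0
    [[ -e ${pci_net_devs[0]} ]] || continue     # skip functions with no netdev bound
    pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the path, keep the ifname
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done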
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:16.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:16.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms
00:36:16.499
00:36:16.499 --- 10.0.0.2 ping statistics ---
00:36:16.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:16.499 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:16.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:16.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:36:16.499
00:36:16.499 --- 10.0.0.1 ping statistics ---
00:36:16.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:16.499 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
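nvmf_tcp_init above splits one physical link pair into a tiny two-host topology: the target port cvl_0_0 moves into namespace cvl_0_0_ns_spdk as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, the 4420 listener port is opened in the firewall, and one ping in each direction proves the path before the target starts. The same sequence, condensed (the iptables comment tag is shortened here):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                    # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port, tagged so cleanup can grep it back out.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# Verify reachability both ways before starting nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1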
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3648022
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3648022
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3648022 ']'
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:16.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
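nvmfappstart above launches nvmf_tgt inside the test namespace with -m 0x3 (cores 0 and 1) and --interrupt-mode, then waitforlisten polls the RPC socket until the application answers. A stripped-down version of that start-and-wait sequence; the real waitforlisten in autotest_common.sh retries rpc.py against $rpc_addr with max_retries=100, which the loop below only approximates:

# Start the SPDK target in the test namespace, interrupt-driven on cores 0-1.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the target is up (simplified waitforlisten).
for (( retries = 100; retries > 0; retries-- )); do
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done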
00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:16.499 14:34:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.499 [2024-11-25 14:34:20.840732] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:16.499 [2024-11-25 14:34:20.841853] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:36:16.499 [2024-11-25 14:34:20.841904] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:16.499 [2024-11-25 14:34:20.943313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:16.499 [2024-11-25 14:34:20.996016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:16.499 [2024-11-25 14:34:20.996068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:16.499 [2024-11-25 14:34:20.996077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:16.499 [2024-11-25 14:34:20.996085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:16.499 [2024-11-25 14:34:20.996091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:16.499 [2024-11-25 14:34:20.997789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:16.499 [2024-11-25 14:34:20.997789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.499 [2024-11-25 14:34:21.075446] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:16.499 [2024-11-25 14:34:21.076265] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:16.499 [2024-11-25 14:34:21.076504] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
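At this point the target is fully up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace on core mask 0x3 with --interrupt-mode, and both reactors sit in interrupt rather than poll mode. The plumbing the trace walked through above reduces to a short recipe; a minimal sketch assuming the paired e810 ports cvl_0_0/cvl_0_1 (on a machine without those NICs, a veth pair is a workable stand-in), run as root from the SPDK checkout:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port; the SPDK_NVMF comment tag is what cleanup greps for later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                   # verify reachability in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

Keeping the data port inside a namespace lets initiator and target share one host while the traffic still crosses a real NIC-to-NIC path.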
00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.761 [2024-11-25 14:34:21.702833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.761 [2024-11-25 14:34:21.735284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.761 NULL1 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.761 14:34:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.761 Delay0 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3648238 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:16.761 14:34:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:17.023 [2024-11-25 14:34:21.865706] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
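With the target listening, all of the configuration above is JSON-RPC; rpc_cmd in the trace is roughly scripts/rpc.py pointed at the default /var/tmp/spdk.sock. A sketch of the same build-up ($rpc is our shorthand; the flag values are exactly those traced):

  rpc="./scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512        # 1000 MiB backing bdev, 512 B blocks
  # wrap it in a delay bdev: average and p99 latencies for reads and writes
  # all set to 1000000 us, i.e. roughly one second per I/O
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev is what gives the test teeth: with ~1 s injected per I/O and perf holding queue depth 128, commands are guaranteed to still be in flight when the subsystem is deleted two seconds into the five-second run.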
00:36:18.941 14:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:18.941 14:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.941 14:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:36:19.204 [long run of 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' notices, condensed here; the run continues between each of the queue-pair state notices below through 00:36:20.150]
00:36:19.204 [2024-11-25 14:34:24.070193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e072c0 is same with the state(6) to be set
00:36:19.205 [2024-11-25 14:34:24.075436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcc04000c80 is same with the state(6) to be set
00:36:20.149 [2024-11-25 14:34:25.046347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e089a0 is same with the state(6) to be set
00:36:20.150 [2024-11-25 14:34:25.073448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e074a0 is same with the state(6) to be set
00:36:20.150 [2024-11-25 14:34:25.074255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e07860 is same with the state(6) to be set
00:36:20.150 [2024-11-25 14:34:25.077364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcc0400d060 is same with the state(6) to be set
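That burst is the expected outcome: deleting cnode1 tears down its queue pairs, so in-flight commands drain with sct=0/sc=8 (generic status, command aborted due to SQ deletion) and new submissions fail with -6 (-ENXIO, the queue pair is gone). All the harness has to do afterwards is wait for perf to notice and exit; a sketch of the wait loop as the trace suggests (delete_subsystem.sh lines 34-38, $perf_pid standing in for the PID captured at launch):

  delay=0
  while kill -0 $perf_pid 2>/dev/null; do    # perf still running?
      sleep 0.5
      ((delay++ > 30)) && exit 1             # fail if perf survives ~15 s past the delete
  done

Once kill -0 reports 'No such process', the test recreates the subsystem and repeats the whole sequence with a shorter run (-t 3) so deletion under load is exercised a second time, as the remaining trace shows.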
Read completed with error (sct=0, sc=8) 00:36:20.150 Read completed with error (sct=0, sc=8) 00:36:20.150 Read completed with error (sct=0, sc=8) 00:36:20.150 Write completed with error (sct=0, sc=8) 00:36:20.150 Read completed with error (sct=0, sc=8) 00:36:20.150 Write completed with error (sct=0, sc=8) 00:36:20.150 [2024-11-25 14:34:25.077567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fcc0400d800 is same with the state(6) to be set 00:36:20.150 Initializing NVMe Controllers 00:36:20.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:20.150 Controller IO queue size 128, less than required. 00:36:20.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:20.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:20.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:20.150 Initialization complete. Launching workers. 00:36:20.150 ======================================================== 00:36:20.150 Latency(us) 00:36:20.150 Device Information : IOPS MiB/s Average min max 00:36:20.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.69 0.08 890615.24 407.15 1007426.46 00:36:20.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 188.61 0.09 898927.74 461.26 1012866.25 00:36:20.150 ======================================================== 00:36:20.150 Total : 360.30 0.18 894966.67 407.15 1012866.25 00:36:20.150 00:36:20.150 [2024-11-25 14:34:25.078094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e089a0 (9): Bad file descriptor 00:36:20.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:36:20.150 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.150 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:36:20.150 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3648238 00:36:20.150 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3648238 00:36:20.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3648238) - No such process 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3648238 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3648238 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3648238 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:20.725 [2024-11-25 14:34:25.611113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3648915 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:20.725 14:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:20.725 [2024-11-25 14:34:25.710152] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:36:21.299 14:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:21.299 14:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:21.299 14:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:21.560 14:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:21.560 14:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:21.560 14:34:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:22.133 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:22.133 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:22.133 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:22.707 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:22.707 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:22.707 14:34:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:23.280 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:23.280 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:23.280 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:23.852 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:23.852 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:23.852 14:34:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:23.852 Initializing NVMe Controllers 00:36:23.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:23.852 Controller IO queue size 128, less than required. 00:36:23.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:36:23.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:23.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:23.852 Initialization complete. Launching workers. 00:36:23.852 ======================================================== 00:36:23.852 Latency(us) 00:36:23.852 Device Information : IOPS MiB/s Average min max 00:36:23.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002100.63 1000193.69 1005848.41 00:36:23.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004669.10 1000368.37 1041638.41 00:36:23.852 ======================================================== 00:36:23.852 Total : 256.00 0.12 1003384.86 1000193.69 1041638.41 00:36:23.852 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3648915 00:36:24.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3648915) - No such process 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3648915 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:24.116 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:24.116 rmmod nvme_tcp 00:36:24.116 rmmod nvme_fabrics 00:36:24.116 rmmod nvme_keyring 00:36:24.375 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:24.375 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:36:24.375 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3648022 ']' 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3648022 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3648022 ']' 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3648022 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # uname 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3648022 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3648022' 00:36:24.376 killing process with pid 3648022 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3648022 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3648022 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:24.376 14:34:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:26.924 00:36:26.924 real 0m18.437s 00:36:26.924 user 0m26.984s 00:36:26.924 sys 0m7.387s 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:26.924 ************************************ 00:36:26.924 END TEST nvmf_delete_subsystem 00:36:26.924 ************************************ 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:26.924 ************************************ 00:36:26.924 START TEST nvmf_host_management 00:36:26.924 ************************************ 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:26.924 * Looking for test storage... 00:36:26.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:26.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.924 --rc genhtml_branch_coverage=1 00:36:26.924 --rc genhtml_function_coverage=1 00:36:26.924 --rc genhtml_legend=1 00:36:26.924 --rc geninfo_all_blocks=1 00:36:26.924 --rc geninfo_unexecuted_blocks=1 00:36:26.924 00:36:26.924 ' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:26.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.924 --rc genhtml_branch_coverage=1 00:36:26.924 --rc genhtml_function_coverage=1 00:36:26.924 --rc genhtml_legend=1 00:36:26.924 --rc geninfo_all_blocks=1 00:36:26.924 --rc geninfo_unexecuted_blocks=1 00:36:26.924 00:36:26.924 ' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:26.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.924 --rc genhtml_branch_coverage=1 00:36:26.924 --rc genhtml_function_coverage=1 00:36:26.924 --rc genhtml_legend=1 00:36:26.924 --rc geninfo_all_blocks=1 00:36:26.924 --rc geninfo_unexecuted_blocks=1 00:36:26.924 00:36:26.924 ' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:26.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:26.924 --rc genhtml_branch_coverage=1 00:36:26.924 --rc genhtml_function_coverage=1 00:36:26.924 --rc genhtml_legend=1 
00:36:26.924 --rc geninfo_all_blocks=1 00:36:26.924 --rc geninfo_unexecuted_blocks=1 00:36:26.924 00:36:26.924 ' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:26.924 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin prefixes from earlier sourcing elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[duplicated prefixes elided as above] 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicated prefixes elided as above] 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicated prefixes elided as above] 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:26.925 14:34:31
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:36:26.925 14:34:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:35.069 14:34:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:35.069 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:35.069 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:35.069 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:35.070 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:35.070 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:35.070 14:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:35.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:35.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:36:35.070 00:36:35.070 --- 10.0.0.2 ping statistics --- 00:36:35.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.070 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:35.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:35.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:36:35.070 00:36:35.070 --- 10.0.0.1 ping statistics --- 00:36:35.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:35.070 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3653818 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3653818 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3653818 ']' 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:35.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:35.070 14:34:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.070 [2024-11-25 14:34:39.370032] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:35.070 [2024-11-25 14:34:39.371275] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:36:35.070 [2024-11-25 14:34:39.371329] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:35.070 [2024-11-25 14:34:39.472121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:35.070 [2024-11-25 14:34:39.525960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:35.070 [2024-11-25 14:34:39.526013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:35.070 [2024-11-25 14:34:39.526027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:35.070 [2024-11-25 14:34:39.526037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:35.070 [2024-11-25 14:34:39.526045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:35.070 [2024-11-25 14:34:39.528489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:35.070 [2024-11-25 14:34:39.528655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:35.070 [2024-11-25 14:34:39.528814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:35.070 [2024-11-25 14:34:39.528814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:35.070 [2024-11-25 14:34:39.607461] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:35.070 [2024-11-25 14:34:39.608548] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:35.071 [2024-11-25 14:34:39.608897] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:35.071 [2024-11-25 14:34:39.609611] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:35.071 [2024-11-25 14:34:39.609647] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
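For reference, the nvmf_tcp_init and nvmfappstart steps traced above reduce to roughly the following shell sequence. This is a sketch reconstructed from the trace; the interface names (cvl_0_0, cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are simply the values this run picked.

# put the target-side port into its own network namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # sanity-check reachability in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# launch the target inside the namespace: cores 1-4 (-m 0x1E), interrupt mode
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E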
00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.333 [2024-11-25 14:34:40.229837] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.333 Malloc0 00:36:35.333 [2024-11-25 14:34:40.326051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3653982 00:36:35.333 14:34:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3653982 /var/tmp/bdevperf.sock 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3653982 ']' 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:35.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:35.333 { 00:36:35.333 "params": { 00:36:35.333 "name": "Nvme$subsystem", 00:36:35.333 "trtype": "$TEST_TRANSPORT", 00:36:35.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:35.333 "adrfam": "ipv4", 00:36:35.333 "trsvcid": "$NVMF_PORT", 00:36:35.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:35.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:35.333 "hdgst": ${hdgst:-false}, 00:36:35.333 "ddgst": ${ddgst:-false} 00:36:35.333 }, 00:36:35.333 "method": "bdev_nvme_attach_controller" 00:36:35.333 } 00:36:35.333 EOF 00:36:35.333 )") 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
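The heredoc template assembled above is what gen_nvmf_target_json expands; the filled-in JSON is printed immediately below. bdevperf consumes it as its bdev configuration over an anonymous pipe, and the /dev/fd/63 in the traced command line is consistent with bash process substitution, so the invocation is roughly as follows (a sketch, not the literal script text):

# feed the expanded attach-controller config for subsystem 0 to bdevperf;
# -q 64 -o 65536 -w verify -t 10 match the traced run (QD 64, 64 KiB verify I/O, 10 s)
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 10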
00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:35.333 14:34:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:35.333 "params": { 00:36:35.333 "name": "Nvme0", 00:36:35.333 "trtype": "tcp", 00:36:35.333 "traddr": "10.0.0.2", 00:36:35.333 "adrfam": "ipv4", 00:36:35.333 "trsvcid": "4420", 00:36:35.333 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:35.333 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:35.333 "hdgst": false, 00:36:35.333 "ddgst": false 00:36:35.333 }, 00:36:35.333 "method": "bdev_nvme_attach_controller" 00:36:35.333 }' 00:36:35.596 [2024-11-25 14:34:40.434548] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:36:35.596 [2024-11-25 14:34:40.434621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653982 ] 00:36:35.596 [2024-11-25 14:34:40.527785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.596 [2024-11-25 14:34:40.581728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:35.858 Running I/O for 10 seconds... 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.435 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:36.435 [2024-11-25 14:34:41.345410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [2024-11-25 14:34:41.345559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 
00:36:36.435 [2024-11-25 14:34:41.345567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.435 [... the same tcp.c:1773 notice repeats roughly forty more times here with only the microsecond timestamp advancing; the duplicate run is elided ...] 00:36:36.436 [2024-11-25 14:34:41.345899]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.436 [2024-11-25 14:34:41.345907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.436 [2024-11-25 14:34:41.345918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.436 [2024-11-25 14:34:41.345925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.436 [2024-11-25 14:34:41.345933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2078210 is same with the state(6) to be set 00:36:36.436 [2024-11-25 14:34:41.346324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.436 [2024-11-25 14:34:41.346731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:36.436 [2024-11-25 14:34:41.346739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 45 further identical READ / ABORTED - SQ DELETION (00/08) pairs trimmed (cid:19-63, lba:108928-114560 in 128-block steps): every read still in flight on qid:1 completes with the same abort status while the submission queue is deleted; the shell trace lines interleaved with them are kept below ...]
00:36:36.436 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:36.437 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:36:36.437 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:36.437 [2024-11-25 14:34:41.347585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0b290 is same with the state(6) to be set
00:36:36.437 [2024-11-25 14:34:41.347710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:36:36.437 [2024-11-25 14:34:41.347725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the matching ASYNC EVENT REQUEST / ABORTED pair for qid:0 cid:1 trimmed ...]
00:36:36.437 [2024-11-25 14:34:41.347752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.438 [2024-11-25 14:34:41.347760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.438 [2024-11-25 14:34:41.347769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:36.438 [2024-11-25 14:34:41.347777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.438 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:36.438 [2024-11-25 14:34:41.347785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf2080 is same with the state(6) to be set 00:36:36.438 [2024-11-25 14:34:41.349045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:36.438 task offset: 106496 on job bdev=Nvme0n1 fails 00:36:36.438 00:36:36.438 Latency(us) 00:36:36.438 [2024-11-25T13:34:41.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:36.438 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:36.438 Job: Nvme0n1 ended in about 0.60 seconds with error 00:36:36.438 Verification LBA range: start 0x0 length 0x400 00:36:36.438 Nvme0n1 : 0.60 1385.31 86.58 106.56 0.00 41917.67 5079.04 36700.16 00:36:36.438 [2024-11-25T13:34:41.528Z] =================================================================================================================== 00:36:36.438 [2024-11-25T13:34:41.528Z] Total : 1385.31 86.58 106.56 0.00 41917.67 5079.04 36700.16 00:36:36.438 [2024-11-25 14:34:41.351305] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:36.438 [2024-11-25 14:34:41.351342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf2080 (9): Bad file descriptor 00:36:36.438 [2024-11-25 14:34:41.352829] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:36:36.438 [2024-11-25 14:34:41.352934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:36:36.438 [2024-11-25 14:34:41.352968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:36.438 [2024-11-25 14:34:41.352988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:36:36.438 [2024-11-25 14:34:41.352998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:36:36.438 [2024-11-25 14:34:41.353005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.438 [2024-11-25 14:34:41.353013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1bf2080 00:36:36.438 [2024-11-25 14:34:41.353042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf2080 (9): Bad file descriptor 00:36:36.438 [2024-11-25 14:34:41.353056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:36:36.438 
[2024-11-25 14:34:41.353065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:36:36.438 [2024-11-25 14:34:41.353075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:36:36.438 [2024-11-25 14:34:41.353087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:36:36.438 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.438 14:34:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3653982 00:36:37.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3653982) - No such process 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:37.382 { 00:36:37.382 "params": { 00:36:37.382 "name": "Nvme$subsystem", 00:36:37.382 "trtype": "$TEST_TRANSPORT", 00:36:37.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:37.382 "adrfam": "ipv4", 00:36:37.382 "trsvcid": "$NVMF_PORT", 00:36:37.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:37.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:37.382 "hdgst": ${hdgst:-false}, 00:36:37.382 "ddgst": ${ddgst:-false} 00:36:37.382 }, 00:36:37.382 "method": "bdev_nvme_attach_controller" 00:36:37.382 } 00:36:37.382 EOF 00:36:37.382 )") 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
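The --json /dev/fd/62 argument traced just above is bash process substitution at work: gen_nvmf_target_json assembles the bdev_nvme_attach_controller parameters into a JSON config on the fly, and bdevperf reads the file descriptor as if it were a regular config file. A minimal sketch of the same pattern, with illustrative values and a stand-in gen_json helper rather than the harness's real function:

gen_json() {
  # emit a standard SPDK JSON config containing one NVMe-oF attach (values are examples)
  cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0"
      }
    }]
  }]
}
JSON
}
# <(gen_json) expands to a /dev/fd/NN path -- which is where the /dev/fd/62 in the trace comes from
./build/examples/bdevperf --json <(gen_json) -q 64 -o 65536 -w verify -t 1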
00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:36:37.382 14:34:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:37.382 "params": { 00:36:37.382 "name": "Nvme0", 00:36:37.382 "trtype": "tcp", 00:36:37.382 "traddr": "10.0.0.2", 00:36:37.382 "adrfam": "ipv4", 00:36:37.382 "trsvcid": "4420", 00:36:37.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:37.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:37.382 "hdgst": false, 00:36:37.382 "ddgst": false 00:36:37.382 }, 00:36:37.382 "method": "bdev_nvme_attach_controller" 00:36:37.382 }' 00:36:37.382 [2024-11-25 14:34:42.435150] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:36:37.382 [2024-11-25 14:34:42.435243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654329 ] 00:36:37.643 [2024-11-25 14:34:42.530818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.643 [2024-11-25 14:34:42.582739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.904 Running I/O for 1 seconds... 00:36:38.847 1698.00 IOPS, 106.12 MiB/s 00:36:38.847 Latency(us) 00:36:38.847 [2024-11-25T13:34:43.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.847 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:38.847 Verification LBA range: start 0x0 length 0x400 00:36:38.847 Nvme0n1 : 1.01 1748.25 109.27 0.00 0.00 35902.49 2170.88 38666.24 00:36:38.847 [2024-11-25T13:34:43.937Z] =================================================================================================================== 00:36:38.847 [2024-11-25T13:34:43.937Z] Total : 1748.25 109.27 0.00 0.00 35902.49 2170.88 38666.24 00:36:39.108 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:39.108 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:39.108 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:39.109 rmmod nvme_tcp 00:36:39.109 rmmod nvme_fabrics 00:36:39.109 rmmod nvme_keyring 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3653818 ']' 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3653818 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3653818 ']' 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3653818 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3653818 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3653818' 00:36:39.109 killing process with pid 3653818 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3653818 00:36:39.109 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3653818 00:36:39.371 [2024-11-25 14:34:44.270526] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.371 14:34:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.284 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:36:41.545 00:36:41.545 real 0m14.810s 00:36:41.545 user 0m19.988s 00:36:41.545 sys 0m7.500s 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:41.545 ************************************ 00:36:41.545 END TEST nvmf_host_management 00:36:41.545 ************************************ 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:41.545 ************************************ 00:36:41.545 START TEST nvmf_lvol 00:36:41.545 ************************************ 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:41.545 * Looking for test storage... 
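The asterisk banners, the START TEST / END TEST markers, and the real/user/sys block above all come from the harness's run_test wrapper, which times each test script and brackets its output so the log can be split per test. A hedged sketch of that pattern (simplified; the real run_test in autotest_common.sh also propagates exit codes and manages xtrace state):

run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"    # the test script itself, e.g. nvmf_lvol.sh --transport=tcp --interrupt-mode
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}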
00:36:41.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:41.545 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:36:41.805 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:41.805 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.805 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.805 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.805 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.805 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:41.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.806 --rc genhtml_branch_coverage=1 00:36:41.806 --rc genhtml_function_coverage=1 00:36:41.806 --rc genhtml_legend=1 00:36:41.806 --rc geninfo_all_blocks=1 00:36:41.806 --rc geninfo_unexecuted_blocks=1 00:36:41.806 00:36:41.806 ' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:41.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.806 --rc genhtml_branch_coverage=1 00:36:41.806 --rc genhtml_function_coverage=1 00:36:41.806 --rc genhtml_legend=1 00:36:41.806 --rc geninfo_all_blocks=1 00:36:41.806 --rc geninfo_unexecuted_blocks=1 00:36:41.806 00:36:41.806 ' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:41.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.806 --rc genhtml_branch_coverage=1 00:36:41.806 --rc genhtml_function_coverage=1 00:36:41.806 --rc genhtml_legend=1 00:36:41.806 --rc geninfo_all_blocks=1 00:36:41.806 --rc geninfo_unexecuted_blocks=1 00:36:41.806 00:36:41.806 ' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:41.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.806 --rc genhtml_branch_coverage=1 00:36:41.806 --rc genhtml_function_coverage=1 00:36:41.806 --rc genhtml_legend=1 00:36:41.806 --rc geninfo_all_blocks=1 00:36:41.806 --rc geninfo_unexecuted_blocks=1 00:36:41.806 00:36:41.806 ' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated several more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same list re-prepended, trimmed ...] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same list re-prepended, trimmed ...] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the exported PATH, trimmed ...] 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.806 14:34:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:41.806 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:36:41.807 14:34:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:50.059 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:50.060 14:34:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:50.060 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:50.060 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:50.060 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:50.060 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:50.060 14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:50.060 
14:34:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:50.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:50.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:36:50.060 00:36:50.060 --- 10.0.0.2 ping statistics --- 00:36:50.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.060 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:50.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:50.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:36:50.060 00:36:50.060 --- 10.0.0.1 ping statistics --- 00:36:50.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.060 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3658994 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3658994 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3658994 ']' 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:50.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.060 14:34:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:50.060 [2024-11-25 14:34:54.274186] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
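
The target for this lvol run was started above with "-i 0 -e 0xFFFF --interrupt-mode -m 0x7", and the spdk_interrupt_mode_enable notice confirms interrupt mode is active before DPDK initialization continues below. As a minimal sketch of one way to spot-check the mode on a live target over its RPC socket (the default /var/tmp/spdk.sock that waitforlisten polls above) — an illustrative query, not part of the test scripts, and the exact fields returned vary by SPDK version:

  # framework_get_reactors is a standard SPDK RPC; recent releases report
  # per-reactor state, including whether the reactor runs in interrupt mode.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_reactors
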
00:36:50.060 [2024-11-25 14:34:54.275317] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:36:50.060 [2024-11-25 14:34:54.275369] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.060 [2024-11-25 14:34:54.374317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:50.061 [2024-11-25 14:34:54.426120] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.061 [2024-11-25 14:34:54.426178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.061 [2024-11-25 14:34:54.426188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.061 [2024-11-25 14:34:54.426195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.061 [2024-11-25 14:34:54.426201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:50.061 [2024-11-25 14:34:54.428228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.061 [2024-11-25 14:34:54.428390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.061 [2024-11-25 14:34:54.428391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.061 [2024-11-25 14:34:54.505018] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:50.061 [2024-11-25 14:34:54.506131] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:50.061 [2024-11-25 14:34:54.506876] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:50.061 [2024-11-25 14:34:54.506972] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
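
The three reactors are now up in interrupt mode and every nvmf_tgt poll-group thread has been switched to intr mode, so the test body can drive the target entirely through rpc.py. Condensed into a sketch, the sequence the following trace executes is roughly this (rpc.py abbreviates the full scripts/rpc.py path used in the trace; UUIDs are generated at run time):

  rpc.py nvmf_create_transport -t tcp -o -u 8192            # flags exactly as in the trace
  rpc.py bdev_malloc_create 64 512                          # Malloc0: 64 MiB, 512 B blocks
  rpc.py bdev_malloc_create 64 512                          # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # raid0 over both
  rpc.py bdev_lvol_create_lvstore raid0 lvs                 # prints the lvstore UUID
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20             # 20 MiB lvol on the raid0 store
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While spdk_nvme_perf then runs ten seconds of 4 KiB randwrite against that namespace, the test snapshots the lvol, resizes it to 30 MiB, clones the snapshot, and inflates the clone underneath the live I/O.
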
00:36:50.061 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.061 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:36:50.061 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:50.061 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:50.061 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:50.061 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.061 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:50.322 [2024-11-25 14:34:55.293477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.322 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:50.582 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:36:50.582 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:50.843 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:36:50.843 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:36:51.103 14:34:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:36:51.103 14:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a716d4d3-cb96-4ecc-ae4a-9d486e1d92a5 00:36:51.103 14:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a716d4d3-cb96-4ecc-ae4a-9d486e1d92a5 lvol 20 00:36:51.363 14:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5c2c65f6-6fa6-4586-adb7-930cc1bf48b2 00:36:51.363 14:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:51.624 14:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c2c65f6-6fa6-4586-adb7-930cc1bf48b2 00:36:51.885 14:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:51.885 [2024-11-25 14:34:56.877361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:36:51.885 14:34:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:52.146 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3659390 00:36:52.146 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:36:52.146 14:34:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:36:53.088 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5c2c65f6-6fa6-4586-adb7-930cc1bf48b2 MY_SNAPSHOT 00:36:53.349 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1b65e1f0-35e2-48f1-b9aa-9732a16d71f8 00:36:53.349 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5c2c65f6-6fa6-4586-adb7-930cc1bf48b2 30 00:36:53.609 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1b65e1f0-35e2-48f1-b9aa-9732a16d71f8 MY_CLONE 00:36:53.870 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a15660b2-6bb5-4b98-a022-e4507ba104d7 00:36:53.870 14:34:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a15660b2-6bb5-4b98-a022-e4507ba104d7 00:36:54.443 14:34:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3659390 00:37:02.588 Initializing NVMe Controllers 00:37:02.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:02.588 Controller IO queue size 128, less than required. 00:37:02.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:02.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:02.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:02.588 Initialization complete. Launching workers. 
00:37:02.588 ======================================================== 00:37:02.588 Latency(us) 00:37:02.588 Device Information : IOPS MiB/s Average min max 00:37:02.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15166.30 59.24 8442.50 4374.34 72044.90 00:37:02.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15261.50 59.62 8389.21 3981.39 67698.59 00:37:02.588 ======================================================== 00:37:02.588 Total : 30427.80 118.86 8415.77 3981.39 72044.90 00:37:02.588 00:37:02.588 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:02.849 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5c2c65f6-6fa6-4586-adb7-930cc1bf48b2 00:37:02.850 14:35:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a716d4d3-cb96-4ecc-ae4a-9d486e1d92a5 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:03.111 rmmod nvme_tcp 00:37:03.111 rmmod nvme_fabrics 00:37:03.111 rmmod nvme_keyring 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3658994 ']' 00:37:03.111 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3658994 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3658994 ']' 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3658994 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3658994 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3658994' 00:37:03.112 killing process with pid 3658994 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3658994 00:37:03.112 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3658994 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:03.374 14:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.334 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:05.334 00:37:05.334 real 0m23.932s 00:37:05.334 user 0m56.331s 00:37:05.334 sys 0m10.580s 00:37:05.334 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:05.334 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:05.334 ************************************ 00:37:05.334 END TEST nvmf_lvol 00:37:05.334 ************************************ 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:05.596 ************************************ 00:37:05.596 START TEST nvmf_lvs_grow 00:37:05.596 
************************************ 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:37:05.596 * Looking for test storage... 00:37:05.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:05.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.596 --rc genhtml_branch_coverage=1 00:37:05.596 --rc genhtml_function_coverage=1 00:37:05.596 --rc genhtml_legend=1 00:37:05.596 --rc geninfo_all_blocks=1 00:37:05.596 --rc geninfo_unexecuted_blocks=1 00:37:05.596 00:37:05.596 ' 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:05.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.596 --rc genhtml_branch_coverage=1 00:37:05.596 --rc genhtml_function_coverage=1 00:37:05.596 --rc genhtml_legend=1 00:37:05.596 --rc geninfo_all_blocks=1 00:37:05.596 --rc geninfo_unexecuted_blocks=1 00:37:05.596 00:37:05.596 ' 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:05.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.596 --rc genhtml_branch_coverage=1 00:37:05.596 --rc genhtml_function_coverage=1 00:37:05.596 --rc genhtml_legend=1 00:37:05.596 --rc geninfo_all_blocks=1 00:37:05.596 --rc geninfo_unexecuted_blocks=1 00:37:05.596 00:37:05.596 ' 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:05.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:05.596 --rc genhtml_branch_coverage=1 00:37:05.596 --rc genhtml_function_coverage=1 00:37:05.596 --rc genhtml_legend=1 00:37:05.596 --rc geninfo_all_blocks=1 00:37:05.596 --rc geninfo_unexecuted_blocks=1 00:37:05.596 00:37:05.596 ' 00:37:05.596 14:35:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:05.596 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:37:05.858 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:05.858 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:05.858 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:05.858 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:05.858 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:05.858 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:05.858 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [repeated path segments collapsed; paths/export.sh@3 and @4 prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin the same way, @5 exports PATH, @6 echoes the final value] 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
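
This build_nvmf_app_args fragment is where the interrupt-mode flavour of the job is wired in: "-i $NVMF_APP_SHM_ID -e 0xFFFF" is appended unconditionally, and, as the lines just below show, the gate at nvmf/common.sh@33 evaluates true in this run, so --interrupt-mode is appended as well. Reconstructed from the trace as a simplified sketch (not the literal common.sh source; $SPDK_DIR stands in for the workspace path):

  NVMF_APP=("$SPDK_DIR/build/bin/nvmf_tgt")
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id + full tracepoint group mask
  NVMF_APP+=(--interrupt-mode)                   # appended because the common.sh@33 gate is true
  # Once the namespace exists, nvmf_tcp_init prepends the netns wrapper:
  #   NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  # and nvmfappstart finally adds the per-test core mask, e.g. -m 0x1 below.
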
00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:37:05.859 14:35:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:14.014 14:35:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:14.014 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
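
The ID tables built above (e810, x722, mlx) drive gather_supported_nvmf_pci_devs, and the loop that follows resolves each matching PCI function to its kernel net device purely through sysfs. Stripped of the logging and link-state checks, the mapping visible in the trace amounts to this sketch:

  # Map each NIC PCI address to its netdev name, as nvmf/common.sh@411/@427 do.
  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    net_devs+=("${pci_net_devs[@]}")                   # cvl_0_0, then cvl_0_1
  done
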
00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:14.015 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:14.015 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:14.015 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:14.015 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:14.015 14:35:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:14.015 14:35:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:14.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:14.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:37:14.015 00:37:14.015 --- 10.0.0.2 ping statistics --- 00:37:14.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.015 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:14.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:14.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:37:14.015 00:37:14.015 --- 10.0.0.1 ping statistics --- 00:37:14.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:14.015 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3665707 00:37:14.015 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3665707 00:37:14.016 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:14.016 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3665707 ']' 00:37:14.016 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:14.016 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:14.016 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:14.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:14.016 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:14.016 14:35:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:14.016 [2024-11-25 14:35:18.256715] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
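
With this second target up (single core, -m 0x1, interrupt mode again), the suite moves on to nvmf_lvs_grow. The lvs_grow_clean case traced below exercises lvstore behaviour on a file-backed AIO bdev; condensed, it does the following (rpc.py abbreviates the full scripts/rpc.py path; the lvstore UUID is a run-time value):

  truncate -s 200M aio_bdev                              # 200 MiB backing file
  rpc.py bdev_aio_create aio_bdev aio_bdev 4096          # AIO bdev with 4 KiB blocks
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs      # 4 MiB clusters, oversized md reservation
  rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'   # 49
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150         # 150 MiB lvol
  truncate -s 400M aio_bdev                              # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                        # AIO bdev: 51200 -> 102400 blocks...
  # ...but the lvstore still reports 49 data clusters: growing the base bdev alone
  # does not grow the store (vbdev_lvol logs "Unsupported bdev event: type 1").

The lvol is then exported through nqn.2016-06.io.spdk:cnode0 and attached from a separate bdevperf process (-m 0x2) as Nvme0n1 for the remainder of the case.
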
00:37:14.016 [2024-11-25 14:35:18.257846] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:14.016 [2024-11-25 14:35:18.257899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:14.016 [2024-11-25 14:35:18.359280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:14.016 [2024-11-25 14:35:18.410357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:14.016 [2024-11-25 14:35:18.410410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:14.016 [2024-11-25 14:35:18.410419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:14.016 [2024-11-25 14:35:18.410427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:14.016 [2024-11-25 14:35:18.410433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:14.016 [2024-11-25 14:35:18.411190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:14.016 [2024-11-25 14:35:18.487310] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:14.016 [2024-11-25 14:35:18.487611] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:14.016 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:14.016 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:14.016 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:14.016 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:14.016 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:14.278 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:14.278 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:14.278 [2024-11-25 14:35:19.304064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:14.278 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:14.278 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:14.278 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:14.278 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:14.539 ************************************ 00:37:14.539 START TEST lvs_grow_clean 00:37:14.539 ************************************ 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:14.539 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:14.799 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:14.799 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:14.799 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:15.060 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:15.060 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:15.060 14:35:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8cdc259e-c678-43c8-b18c-46dda335adbc lvol 150 00:37:15.320 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=af4d5d8d-5bde-4146-81a2-7438fb482966 00:37:15.320 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:15.320 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:15.320 [2024-11-25 14:35:20.331762] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:15.320 [2024-11-25 14:35:20.331935] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:15.320 true 00:37:15.320 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:15.320 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:15.581 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:15.581 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:15.843 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 af4d5d8d-5bde-4146-81a2-7438fb482966 00:37:15.843 14:35:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:16.104 [2024-11-25 14:35:21.100497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.104 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3666340 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3666340 /var/tmp/bdevperf.sock 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3666340 ']' 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:16.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.365 14:35:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:16.365 [2024-11-25 14:35:21.328605] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:16.365 [2024-11-25 14:35:21.328677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666340 ] 00:37:16.365 [2024-11-25 14:35:21.421008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:16.627 [2024-11-25 14:35:21.474941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:17.200 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.200 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:17.200 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:17.773 Nvme0n1 00:37:17.773 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:17.773 [ 00:37:17.773 { 00:37:17.773 "name": "Nvme0n1", 00:37:17.773 "aliases": [ 00:37:17.773 "af4d5d8d-5bde-4146-81a2-7438fb482966" 00:37:17.773 ], 00:37:17.773 "product_name": "NVMe disk", 00:37:17.773 "block_size": 4096, 00:37:17.773 "num_blocks": 38912, 00:37:17.773 "uuid": "af4d5d8d-5bde-4146-81a2-7438fb482966", 00:37:17.773 "numa_id": 0, 00:37:17.773 "assigned_rate_limits": { 00:37:17.773 "rw_ios_per_sec": 0, 00:37:17.773 "rw_mbytes_per_sec": 0, 00:37:17.773 "r_mbytes_per_sec": 0, 00:37:17.773 "w_mbytes_per_sec": 0 00:37:17.773 }, 00:37:17.773 "claimed": false, 00:37:17.773 "zoned": false, 00:37:17.773 "supported_io_types": { 00:37:17.773 "read": true, 00:37:17.773 "write": true, 00:37:17.774 "unmap": true, 00:37:17.774 "flush": true, 00:37:17.774 "reset": true, 00:37:17.774 "nvme_admin": true, 00:37:17.774 "nvme_io": true, 00:37:17.774 "nvme_io_md": false, 00:37:17.774 "write_zeroes": true, 00:37:17.774 "zcopy": false, 00:37:17.774 "get_zone_info": false, 00:37:17.774 "zone_management": false, 00:37:17.774 "zone_append": false, 00:37:17.774 "compare": true, 00:37:17.774 "compare_and_write": true, 00:37:17.774 "abort": true, 00:37:17.774 "seek_hole": false, 00:37:17.774 "seek_data": false, 00:37:17.774 "copy": true, 
00:37:17.774 "nvme_iov_md": false 00:37:17.774 }, 00:37:17.774 "memory_domains": [ 00:37:17.774 { 00:37:17.774 "dma_device_id": "system", 00:37:17.774 "dma_device_type": 1 00:37:17.774 } 00:37:17.774 ], 00:37:17.774 "driver_specific": { 00:37:17.774 "nvme": [ 00:37:17.774 { 00:37:17.774 "trid": { 00:37:17.774 "trtype": "TCP", 00:37:17.774 "adrfam": "IPv4", 00:37:17.774 "traddr": "10.0.0.2", 00:37:17.774 "trsvcid": "4420", 00:37:17.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:17.774 }, 00:37:17.774 "ctrlr_data": { 00:37:17.774 "cntlid": 1, 00:37:17.774 "vendor_id": "0x8086", 00:37:17.774 "model_number": "SPDK bdev Controller", 00:37:17.774 "serial_number": "SPDK0", 00:37:17.774 "firmware_revision": "25.01", 00:37:17.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:17.774 "oacs": { 00:37:17.774 "security": 0, 00:37:17.774 "format": 0, 00:37:17.774 "firmware": 0, 00:37:17.774 "ns_manage": 0 00:37:17.774 }, 00:37:17.774 "multi_ctrlr": true, 00:37:17.774 "ana_reporting": false 00:37:17.774 }, 00:37:17.774 "vs": { 00:37:17.774 "nvme_version": "1.3" 00:37:17.774 }, 00:37:17.774 "ns_data": { 00:37:17.774 "id": 1, 00:37:17.774 "can_share": true 00:37:17.774 } 00:37:17.774 } 00:37:17.774 ], 00:37:17.774 "mp_policy": "active_passive" 00:37:17.774 } 00:37:17.774 } 00:37:17.774 ] 00:37:17.774 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:17.774 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3666543 00:37:17.774 14:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:17.774 Running I/O for 10 seconds... 
00:37:19.158 Latency(us) 00:37:19.158 [2024-11-25T13:35:24.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:19.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:19.158 Nvme0n1 : 1.00 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:37:19.158 [2024-11-25T13:35:24.248Z] =================================================================================================================== 00:37:19.158 [2024-11-25T13:35:24.248Z] Total : 16764.00 65.48 0.00 0.00 0.00 0.00 0.00 00:37:19.158 00:37:19.730 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:19.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:19.992 Nvme0n1 : 2.00 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:37:19.992 [2024-11-25T13:35:25.082Z] =================================================================================================================== 00:37:19.992 [2024-11-25T13:35:25.082Z] Total : 16954.50 66.23 0.00 0.00 0.00 0.00 0.00 00:37:19.992 00:37:19.992 true 00:37:19.992 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:19.992 14:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:20.254 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:20.254 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:20.254 14:35:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3666543 00:37:20.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:20.826 Nvme0n1 : 3.00 17166.33 67.06 0.00 0.00 0.00 0.00 0.00 00:37:20.826 [2024-11-25T13:35:25.916Z] =================================================================================================================== 00:37:20.826 [2024-11-25T13:35:25.916Z] Total : 17166.33 67.06 0.00 0.00 0.00 0.00 0.00 00:37:20.826 00:37:22.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:22.212 Nvme0n1 : 4.00 17954.75 70.14 0.00 0.00 0.00 0.00 0.00 00:37:22.212 [2024-11-25T13:35:27.302Z] =================================================================================================================== 00:37:22.212 [2024-11-25T13:35:27.302Z] Total : 17954.75 70.14 0.00 0.00 0.00 0.00 0.00 00:37:22.212 00:37:23.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:23.154 Nvme0n1 : 5.00 19440.80 75.94 0.00 0.00 0.00 0.00 0.00 00:37:23.154 [2024-11-25T13:35:28.244Z] =================================================================================================================== 00:37:23.154 [2024-11-25T13:35:28.244Z] Total : 19440.80 75.94 0.00 0.00 0.00 0.00 0.00 00:37:23.154 00:37:24.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:24.097 Nvme0n1 : 6.00 20434.00 79.82 0.00 0.00 0.00 0.00 0.00 00:37:24.097 [2024-11-25T13:35:29.187Z] 
=================================================================================================================== 00:37:24.097 [2024-11-25T13:35:29.187Z] Total : 20434.00 79.82 0.00 0.00 0.00 0.00 0.00 00:37:24.097 00:37:25.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:25.039 Nvme0n1 : 7.00 21143.43 82.59 0.00 0.00 0.00 0.00 0.00 00:37:25.039 [2024-11-25T13:35:30.129Z] =================================================================================================================== 00:37:25.039 [2024-11-25T13:35:30.129Z] Total : 21143.43 82.59 0.00 0.00 0.00 0.00 0.00 00:37:25.039 00:37:25.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:25.983 Nvme0n1 : 8.00 21679.75 84.69 0.00 0.00 0.00 0.00 0.00 00:37:25.983 [2024-11-25T13:35:31.073Z] =================================================================================================================== 00:37:25.983 [2024-11-25T13:35:31.073Z] Total : 21679.75 84.69 0.00 0.00 0.00 0.00 0.00 00:37:25.983 00:37:26.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:26.925 Nvme0n1 : 9.00 22093.11 86.30 0.00 0.00 0.00 0.00 0.00 00:37:26.925 [2024-11-25T13:35:32.015Z] =================================================================================================================== 00:37:26.925 [2024-11-25T13:35:32.015Z] Total : 22093.11 86.30 0.00 0.00 0.00 0.00 0.00 00:37:26.925 00:37:27.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.868 Nvme0n1 : 10.00 22436.50 87.64 0.00 0.00 0.00 0.00 0.00 00:37:27.868 [2024-11-25T13:35:32.958Z] =================================================================================================================== 00:37:27.868 [2024-11-25T13:35:32.958Z] Total : 22436.50 87.64 0.00 0.00 0.00 0.00 0.00 00:37:27.868 00:37:27.868 00:37:27.868 Latency(us) 00:37:27.868 [2024-11-25T13:35:32.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:27.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.868 Nvme0n1 : 10.00 22434.81 87.64 0.00 0.00 5701.68 2580.48 32549.55 00:37:27.868 [2024-11-25T13:35:32.958Z] =================================================================================================================== 00:37:27.868 [2024-11-25T13:35:32.958Z] Total : 22434.81 87.64 0.00 0.00 5701.68 2580.48 32549.55 00:37:27.868 { 00:37:27.868 "results": [ 00:37:27.868 { 00:37:27.868 "job": "Nvme0n1", 00:37:27.868 "core_mask": "0x2", 00:37:27.868 "workload": "randwrite", 00:37:27.868 "status": "finished", 00:37:27.868 "queue_depth": 128, 00:37:27.868 "io_size": 4096, 00:37:27.868 "runtime": 10.00365, 00:37:27.868 "iops": 22434.811293877734, 00:37:27.868 "mibps": 87.6359816167099, 00:37:27.868 "io_failed": 0, 00:37:27.868 "io_timeout": 0, 00:37:27.868 "avg_latency_us": 5701.684774881552, 00:37:27.868 "min_latency_us": 2580.48, 00:37:27.868 "max_latency_us": 32549.546666666665 00:37:27.868 } 00:37:27.868 ], 00:37:27.868 "core_count": 1 00:37:27.868 } 00:37:27.868 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3666340 00:37:27.868 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3666340 ']' 00:37:27.868 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3666340 00:37:27.868 
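Sanity check on the summary row above, using only numbers from this trace: throughput is IOPS times the 4096-byte I/O size, 22434.81 * 4096 / 1048576 = 87.64 MiB/s, which matches the MiB/s column. The average latency is likewise consistent with Little's law for the configured queue depth: 22434.81 IOPS * 5701.68 us = roughly 128 I/Os in flight, i.e. the -q 128 passed to bdevperf.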
14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:27.868 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:27.868 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3666340 00:37:28.129 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:28.129 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:28.129 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3666340' 00:37:28.129 killing process with pid 3666340 00:37:28.129 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3666340 00:37:28.129 Received shutdown signal, test time was about 10.000000 seconds 00:37:28.129 00:37:28.129 Latency(us) 00:37:28.129 [2024-11-25T13:35:33.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:28.129 [2024-11-25T13:35:33.219Z] =================================================================================================================== 00:37:28.129 [2024-11-25T13:35:33.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:28.129 14:35:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3666340 00:37:28.129 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:28.390 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:28.390 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:28.390 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:28.652 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:28.652 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:28.652 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:28.913 [2024-11-25 14:35:33.779801] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:28.913 
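The teardown above removes the listener and subsystem, confirms free_clusters is 61 (the 150 MiB lvol pins ceil(150/4) = 38 of the 99 data clusters, and 99 - 38 = 61), then deletes the base AIO bdev, which hot-removes the lvstore. The NOT wrapper whose trace starts here passes only if the wrapped RPC fails; the expected JSON-RPC error (-19, "No such device") appears a few lines below. A minimal sketch of the same check without the test harness, reusing the UUID from the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc \
        && echo 'unexpected: lvstore still present' \
        || echo 'lvstore gone, as expected'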
14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:28.913 14:35:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:28.913 request: 00:37:28.913 { 00:37:28.913 "uuid": "8cdc259e-c678-43c8-b18c-46dda335adbc", 00:37:28.913 "method": "bdev_lvol_get_lvstores", 00:37:28.913 "req_id": 1 00:37:28.913 } 00:37:28.913 Got JSON-RPC error response 00:37:28.913 response: 00:37:28.913 { 00:37:28.913 "code": -19, 00:37:28.913 "message": "No such device" 00:37:28.913 } 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:29.175 aio_bdev 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
af4d5d8d-5bde-4146-81a2-7438fb482966 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=af4d5d8d-5bde-4146-81a2-7438fb482966 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:29.175 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:29.436 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b af4d5d8d-5bde-4146-81a2-7438fb482966 -t 2000 00:37:29.697 [ 00:37:29.697 { 00:37:29.697 "name": "af4d5d8d-5bde-4146-81a2-7438fb482966", 00:37:29.697 "aliases": [ 00:37:29.697 "lvs/lvol" 00:37:29.697 ], 00:37:29.697 "product_name": "Logical Volume", 00:37:29.697 "block_size": 4096, 00:37:29.697 "num_blocks": 38912, 00:37:29.697 "uuid": "af4d5d8d-5bde-4146-81a2-7438fb482966", 00:37:29.697 "assigned_rate_limits": { 00:37:29.697 "rw_ios_per_sec": 0, 00:37:29.697 "rw_mbytes_per_sec": 0, 00:37:29.697 "r_mbytes_per_sec": 0, 00:37:29.697 "w_mbytes_per_sec": 0 00:37:29.697 }, 00:37:29.697 "claimed": false, 00:37:29.697 "zoned": false, 00:37:29.697 "supported_io_types": { 00:37:29.697 "read": true, 00:37:29.697 "write": true, 00:37:29.697 "unmap": true, 00:37:29.697 "flush": false, 00:37:29.697 "reset": true, 00:37:29.697 "nvme_admin": false, 00:37:29.697 "nvme_io": false, 00:37:29.697 "nvme_io_md": false, 00:37:29.697 "write_zeroes": true, 00:37:29.697 "zcopy": false, 00:37:29.697 "get_zone_info": false, 00:37:29.697 "zone_management": false, 00:37:29.697 "zone_append": false, 00:37:29.697 "compare": false, 00:37:29.697 "compare_and_write": false, 00:37:29.697 "abort": false, 00:37:29.697 "seek_hole": true, 00:37:29.697 "seek_data": true, 00:37:29.697 "copy": false, 00:37:29.697 "nvme_iov_md": false 00:37:29.697 }, 00:37:29.697 "driver_specific": { 00:37:29.697 "lvol": { 00:37:29.697 "lvol_store_uuid": "8cdc259e-c678-43c8-b18c-46dda335adbc", 00:37:29.697 "base_bdev": "aio_bdev", 00:37:29.697 "thin_provision": false, 00:37:29.697 "num_allocated_clusters": 38, 00:37:29.697 "snapshot": false, 00:37:29.697 "clone": false, 00:37:29.697 "esnap_clone": false 00:37:29.697 } 00:37:29.697 } 00:37:29.697 } 00:37:29.697 ] 00:37:29.697 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:29.697 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:29.697 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:29.697 14:35:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:29.697 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:29.697 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:29.957 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:29.957 14:35:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete af4d5d8d-5bde-4146-81a2-7438fb482966 00:37:30.219 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8cdc259e-c678-43c8-b18c-46dda335adbc 00:37:30.219 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:30.481 00:37:30.481 real 0m16.083s 00:37:30.481 user 0m15.640s 00:37:30.481 sys 0m1.522s 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:30.481 ************************************ 00:37:30.481 END TEST lvs_grow_clean 00:37:30.481 ************************************ 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:30.481 ************************************ 00:37:30.481 START TEST lvs_grow_dirty 00:37:30.481 ************************************ 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:30.481 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:30.743 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:30.743 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:31.003 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:31.003 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:31.003 14:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:31.263 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:31.263 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:31.263 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9e56309-5ad9-465c-b92c-b37e65ceb142 lvol 150 00:37:31.263 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0b571c1f-750b-48c0-ac6b-c897de3f26af 00:37:31.263 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:31.264 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:31.524 [2024-11-25 14:35:36.459731] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:31.524 [2024-11-25 14:35:36.459883] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:31.524 true 00:37:31.524 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:31.524 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:31.785 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:31.785 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:31.785 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0b571c1f-750b-48c0-ac6b-c897de3f26af 00:37:32.047 14:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:32.309 [2024-11-25 14:35:37.148390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3669342 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3669342 /var/tmp/bdevperf.sock 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3669342 ']' 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:32.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
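The dirty variant re-runs the same grow flow as lvs_grow_clean: the aio file has already been grown to 400M and rescanned above, and the lvstore grow itself is issued mid-run via bdev_lvol_grow_lvstore further below. A condensed sketch of that sequence, with paths and the lvstore UUID taken from this trace (rpc.py abbreviates the full scripts/rpc.py path used throughout):

    truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -u f9e56309-5ad9-465c-b92c-b37e65ceb142
    rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 | jq -r '.[0].total_data_clusters'   # expect 99 after the grow

The difference from the clean variant is the teardown: rather than an orderly shutdown, the target is later killed with SIGKILL, leaving the lvstore dirty on disk.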
00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:32.309 14:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:32.309 [2024-11-25 14:35:37.384970] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:32.309 [2024-11-25 14:35:37.385025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669342 ] 00:37:32.571 [2024-11-25 14:35:37.469408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.571 [2024-11-25 14:35:37.499218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:33.142 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:33.142 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:33.142 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:33.714 Nvme0n1 00:37:33.714 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:33.714 [ 00:37:33.714 { 00:37:33.714 "name": "Nvme0n1", 00:37:33.714 "aliases": [ 00:37:33.714 "0b571c1f-750b-48c0-ac6b-c897de3f26af" 00:37:33.714 ], 00:37:33.714 "product_name": "NVMe disk", 00:37:33.714 "block_size": 4096, 00:37:33.714 "num_blocks": 38912, 00:37:33.714 "uuid": "0b571c1f-750b-48c0-ac6b-c897de3f26af", 00:37:33.714 "numa_id": 0, 00:37:33.714 "assigned_rate_limits": { 00:37:33.714 "rw_ios_per_sec": 0, 00:37:33.714 "rw_mbytes_per_sec": 0, 00:37:33.714 "r_mbytes_per_sec": 0, 00:37:33.714 "w_mbytes_per_sec": 0 00:37:33.714 }, 00:37:33.714 "claimed": false, 00:37:33.714 "zoned": false, 00:37:33.714 "supported_io_types": { 00:37:33.714 "read": true, 00:37:33.714 "write": true, 00:37:33.714 "unmap": true, 00:37:33.714 "flush": true, 00:37:33.714 "reset": true, 00:37:33.714 "nvme_admin": true, 00:37:33.714 "nvme_io": true, 00:37:33.714 "nvme_io_md": false, 00:37:33.714 "write_zeroes": true, 00:37:33.714 "zcopy": false, 00:37:33.714 "get_zone_info": false, 00:37:33.714 "zone_management": false, 00:37:33.714 "zone_append": false, 00:37:33.714 "compare": true, 00:37:33.714 "compare_and_write": true, 00:37:33.714 "abort": true, 00:37:33.714 "seek_hole": false, 00:37:33.714 "seek_data": false, 00:37:33.714 "copy": true, 00:37:33.714 "nvme_iov_md": false 00:37:33.714 }, 00:37:33.714 "memory_domains": [ 00:37:33.714 { 00:37:33.714 "dma_device_id": "system", 00:37:33.714 "dma_device_type": 1 00:37:33.714 } 00:37:33.714 ], 00:37:33.714 "driver_specific": { 00:37:33.714 "nvme": [ 00:37:33.714 { 00:37:33.714 "trid": { 00:37:33.714 "trtype": "TCP", 00:37:33.714 "adrfam": "IPv4", 00:37:33.714 "traddr": "10.0.0.2", 00:37:33.714 "trsvcid": "4420", 00:37:33.714 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:33.714 }, 00:37:33.714 "ctrlr_data": 
{ 00:37:33.714 "cntlid": 1, 00:37:33.714 "vendor_id": "0x8086", 00:37:33.715 "model_number": "SPDK bdev Controller", 00:37:33.715 "serial_number": "SPDK0", 00:37:33.715 "firmware_revision": "25.01", 00:37:33.715 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:33.715 "oacs": { 00:37:33.715 "security": 0, 00:37:33.715 "format": 0, 00:37:33.715 "firmware": 0, 00:37:33.715 "ns_manage": 0 00:37:33.715 }, 00:37:33.715 "multi_ctrlr": true, 00:37:33.715 "ana_reporting": false 00:37:33.715 }, 00:37:33.715 "vs": { 00:37:33.715 "nvme_version": "1.3" 00:37:33.715 }, 00:37:33.715 "ns_data": { 00:37:33.715 "id": 1, 00:37:33.715 "can_share": true 00:37:33.715 } 00:37:33.715 } 00:37:33.715 ], 00:37:33.715 "mp_policy": "active_passive" 00:37:33.715 } 00:37:33.715 } 00:37:33.715 ] 00:37:33.715 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3669524 00:37:33.715 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:33.715 14:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:33.976 Running I/O for 10 seconds... 00:37:34.921 Latency(us) 00:37:34.921 [2024-11-25T13:35:40.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:34.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:34.921 Nvme0n1 : 1.00 17035.00 66.54 0.00 0.00 0.00 0.00 0.00 00:37:34.921 [2024-11-25T13:35:40.011Z] =================================================================================================================== 00:37:34.921 [2024-11-25T13:35:40.011Z] Total : 17035.00 66.54 0.00 0.00 0.00 0.00 0.00 00:37:34.921 00:37:35.863 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:35.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:35.863 Nvme0n1 : 2.00 17253.50 67.40 0.00 0.00 0.00 0.00 0.00 00:37:35.863 [2024-11-25T13:35:40.953Z] =================================================================================================================== 00:37:35.863 [2024-11-25T13:35:40.953Z] Total : 17253.50 67.40 0.00 0.00 0.00 0.00 0.00 00:37:35.863 00:37:35.863 true 00:37:35.863 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:35.863 14:35:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:36.124 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:36.124 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:36.124 14:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3669524 00:37:37.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:37.067 Nvme0n1 : 
3.00 17342.33 67.74 0.00 0.00 0.00 0.00 0.00 00:37:37.067 [2024-11-25T13:35:42.157Z] =================================================================================================================== 00:37:37.067 [2024-11-25T13:35:42.157Z] Total : 17342.33 67.74 0.00 0.00 0.00 0.00 0.00 00:37:37.067 00:37:38.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:38.010 Nvme0n1 : 4.00 17402.75 67.98 0.00 0.00 0.00 0.00 0.00 00:37:38.010 [2024-11-25T13:35:43.100Z] =================================================================================================================== 00:37:38.010 [2024-11-25T13:35:43.100Z] Total : 17402.75 67.98 0.00 0.00 0.00 0.00 0.00 00:37:38.010 00:37:38.953 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:38.953 Nvme0n1 : 5.00 17579.80 68.67 0.00 0.00 0.00 0.00 0.00 00:37:38.953 [2024-11-25T13:35:44.043Z] =================================================================================================================== 00:37:38.953 [2024-11-25T13:35:44.043Z] Total : 17579.80 68.67 0.00 0.00 0.00 0.00 0.00 00:37:38.953 00:37:39.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:39.895 Nvme0n1 : 6.00 18769.83 73.32 0.00 0.00 0.00 0.00 0.00 00:37:39.895 [2024-11-25T13:35:44.985Z] =================================================================================================================== 00:37:39.895 [2024-11-25T13:35:44.985Z] Total : 18769.83 73.32 0.00 0.00 0.00 0.00 0.00 00:37:39.895 00:37:40.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:40.840 Nvme0n1 : 7.00 19631.29 76.68 0.00 0.00 0.00 0.00 0.00 00:37:40.840 [2024-11-25T13:35:45.930Z] =================================================================================================================== 00:37:40.840 [2024-11-25T13:35:45.930Z] Total : 19631.29 76.68 0.00 0.00 0.00 0.00 0.00 00:37:40.840 00:37:41.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:41.874 Nvme0n1 : 8.00 20275.38 79.20 0.00 0.00 0.00 0.00 0.00 00:37:41.874 [2024-11-25T13:35:46.964Z] =================================================================================================================== 00:37:41.874 [2024-11-25T13:35:46.964Z] Total : 20275.38 79.20 0.00 0.00 0.00 0.00 0.00 00:37:41.874 00:37:42.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:42.813 Nvme0n1 : 9.00 20778.11 81.16 0.00 0.00 0.00 0.00 0.00 00:37:42.813 [2024-11-25T13:35:47.903Z] =================================================================================================================== 00:37:42.813 [2024-11-25T13:35:47.903Z] Total : 20778.11 81.16 0.00 0.00 0.00 0.00 0.00 00:37:42.813 00:37:44.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.199 Nvme0n1 : 10.00 21181.90 82.74 0.00 0.00 0.00 0.00 0.00 00:37:44.199 [2024-11-25T13:35:49.289Z] =================================================================================================================== 00:37:44.199 [2024-11-25T13:35:49.290Z] Total : 21181.90 82.74 0.00 0.00 0.00 0.00 0.00 00:37:44.200 00:37:44.200 00:37:44.200 Latency(us) 00:37:44.200 [2024-11-25T13:35:49.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:44.200 Nvme0n1 : 10.01 21183.88 82.75 0.00 0.00 6038.51 3577.17 22500.69 00:37:44.200 
[2024-11-25T13:35:49.290Z] =================================================================================================================== 00:37:44.200 [2024-11-25T13:35:49.290Z] Total : 21183.88 82.75 0.00 0.00 6038.51 3577.17 22500.69 00:37:44.200 { 00:37:44.200 "results": [ 00:37:44.200 { 00:37:44.200 "job": "Nvme0n1", 00:37:44.200 "core_mask": "0x2", 00:37:44.200 "workload": "randwrite", 00:37:44.200 "status": "finished", 00:37:44.200 "queue_depth": 128, 00:37:44.200 "io_size": 4096, 00:37:44.200 "runtime": 10.005106, 00:37:44.200 "iops": 21183.883509080264, 00:37:44.200 "mibps": 82.74954495734478, 00:37:44.200 "io_failed": 0, 00:37:44.200 "io_timeout": 0, 00:37:44.200 "avg_latency_us": 6038.510837269066, 00:37:44.200 "min_latency_us": 3577.173333333333, 00:37:44.200 "max_latency_us": 22500.693333333333 00:37:44.200 } 00:37:44.200 ], 00:37:44.200 "core_count": 1 00:37:44.200 } 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3669342 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3669342 ']' 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3669342 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3669342 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3669342' 00:37:44.200 killing process with pid 3669342 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3669342 00:37:44.200 Received shutdown signal, test time was about 10.000000 seconds 00:37:44.200 00:37:44.200 Latency(us) 00:37:44.200 [2024-11-25T13:35:49.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:44.200 [2024-11-25T13:35:49.290Z] =================================================================================================================== 00:37:44.200 [2024-11-25T13:35:49.290Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:44.200 14:35:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3669342 00:37:44.200 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:44.200 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:37:44.460 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:44.460 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3665707 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3665707 00:37:44.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3665707 Killed "${NVMF_APP[@]}" "$@" 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3671594 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3671594 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3671594 ']' 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:44.720 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:44.721 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:44.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
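The kill -9 above terminates the original nvmf target (pid 3665707) without letting the blobstore close cleanly; the "Killed" line is ordinary bash job-control output from the wait, and the true that follows swallows the nonzero status so the script can continue. A fresh target is then started in interrupt mode, and waitforlisten blocks until its RPC socket answers. A minimal sketch of that kind of wait, with the loop shape assumed rather than taken from autotest_common.sh:

    # poll the RPC socket until the new target responds (loop shape is an assumption)
    while ! /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done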
00:37:44.721 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:44.721 14:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:44.721 [2024-11-25 14:35:49.702311] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:44.721 [2024-11-25 14:35:49.703610] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:44.721 [2024-11-25 14:35:49.703665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:44.721 [2024-11-25 14:35:49.799263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:44.981 [2024-11-25 14:35:49.831655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:44.981 [2024-11-25 14:35:49.831686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:44.981 [2024-11-25 14:35:49.831692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:44.981 [2024-11-25 14:35:49.831697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:44.981 [2024-11-25 14:35:49.831701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:44.981 [2024-11-25 14:35:49.832173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:44.981 [2024-11-25 14:35:49.883713] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:44.981 [2024-11-25 14:35:49.883905] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
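Because the previous target never shut down cleanly, the lvstore's used-cluster and metadata-page masks were not persisted. When the new target re-creates the AIO bdev just below, the blobstore load detects the dirty superblob and runs recovery, replaying metadata pages to rebuild the masks (the bs_recover and "Recover: blob 0x0"/"0x1" notices). The checks that follow confirm nothing was lost across the crash: free_clusters is still 61, total_data_clusters still 99, and the lvol's 38 allocated clusters are intact in the bdev_get_bdevs dump. The verification counterpart to the grow sketch above (same rpc.py shorthand):

    rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 | jq -r '.[0].free_clusters'   # expect 61 after recovery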
00:37:45.552 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:45.552 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:45.552 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:45.552 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:45.552 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:45.552 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.552 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:45.812 [2024-11-25 14:35:50.702241] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:45.812 [2024-11-25 14:35:50.702462] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:45.812 [2024-11-25 14:35:50.702559] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0b571c1f-750b-48c0-ac6b-c897de3f26af 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0b571c1f-750b-48c0-ac6b-c897de3f26af 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:45.812 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:46.072 14:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0b571c1f-750b-48c0-ac6b-c897de3f26af -t 2000 00:37:46.072 [ 00:37:46.072 { 00:37:46.072 "name": "0b571c1f-750b-48c0-ac6b-c897de3f26af", 00:37:46.072 "aliases": [ 00:37:46.072 "lvs/lvol" 00:37:46.072 ], 00:37:46.072 "product_name": "Logical Volume", 00:37:46.072 "block_size": 4096, 00:37:46.072 "num_blocks": 38912, 00:37:46.072 "uuid": "0b571c1f-750b-48c0-ac6b-c897de3f26af", 00:37:46.072 "assigned_rate_limits": { 00:37:46.072 "rw_ios_per_sec": 0, 00:37:46.072 "rw_mbytes_per_sec": 0, 00:37:46.072 
"r_mbytes_per_sec": 0, 00:37:46.072 "w_mbytes_per_sec": 0 00:37:46.072 }, 00:37:46.072 "claimed": false, 00:37:46.072 "zoned": false, 00:37:46.072 "supported_io_types": { 00:37:46.072 "read": true, 00:37:46.072 "write": true, 00:37:46.072 "unmap": true, 00:37:46.072 "flush": false, 00:37:46.072 "reset": true, 00:37:46.072 "nvme_admin": false, 00:37:46.072 "nvme_io": false, 00:37:46.072 "nvme_io_md": false, 00:37:46.072 "write_zeroes": true, 00:37:46.072 "zcopy": false, 00:37:46.072 "get_zone_info": false, 00:37:46.072 "zone_management": false, 00:37:46.072 "zone_append": false, 00:37:46.072 "compare": false, 00:37:46.072 "compare_and_write": false, 00:37:46.072 "abort": false, 00:37:46.072 "seek_hole": true, 00:37:46.072 "seek_data": true, 00:37:46.072 "copy": false, 00:37:46.072 "nvme_iov_md": false 00:37:46.072 }, 00:37:46.072 "driver_specific": { 00:37:46.072 "lvol": { 00:37:46.072 "lvol_store_uuid": "f9e56309-5ad9-465c-b92c-b37e65ceb142", 00:37:46.072 "base_bdev": "aio_bdev", 00:37:46.072 "thin_provision": false, 00:37:46.072 "num_allocated_clusters": 38, 00:37:46.072 "snapshot": false, 00:37:46.072 "clone": false, 00:37:46.072 "esnap_clone": false 00:37:46.072 } 00:37:46.072 } 00:37:46.072 } 00:37:46.072 ] 00:37:46.072 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:46.072 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:46.072 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:46.388 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:46.388 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:46.388 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:46.388 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:46.388 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:46.648 [2024-11-25 14:35:51.596668] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:46.648 14:35:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:46.648 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:46.908 request: 00:37:46.908 { 00:37:46.908 "uuid": "f9e56309-5ad9-465c-b92c-b37e65ceb142", 00:37:46.908 "method": "bdev_lvol_get_lvstores", 00:37:46.908 "req_id": 1 00:37:46.908 } 00:37:46.908 Got JSON-RPC error response 00:37:46.908 response: 00:37:46.908 { 00:37:46.908 "code": -19, 00:37:46.908 "message": "No such device" 00:37:46.908 } 00:37:46.908 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:37:46.908 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:46.908 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:46.909 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:46.909 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:46.909 aio_bdev 00:37:46.909 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0b571c1f-750b-48c0-ac6b-c897de3f26af 00:37:46.909 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0b571c1f-750b-48c0-ac6b-c897de3f26af 00:37:46.909 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:47.169 14:35:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:47.169 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:47.169 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:47.169 14:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:47.169 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0b571c1f-750b-48c0-ac6b-c897de3f26af -t 2000 00:37:47.429 [ 00:37:47.429 { 00:37:47.429 "name": "0b571c1f-750b-48c0-ac6b-c897de3f26af", 00:37:47.429 "aliases": [ 00:37:47.429 "lvs/lvol" 00:37:47.429 ], 00:37:47.429 "product_name": "Logical Volume", 00:37:47.429 "block_size": 4096, 00:37:47.429 "num_blocks": 38912, 00:37:47.429 "uuid": "0b571c1f-750b-48c0-ac6b-c897de3f26af", 00:37:47.429 "assigned_rate_limits": { 00:37:47.429 "rw_ios_per_sec": 0, 00:37:47.429 "rw_mbytes_per_sec": 0, 00:37:47.429 "r_mbytes_per_sec": 0, 00:37:47.429 "w_mbytes_per_sec": 0 00:37:47.429 }, 00:37:47.429 "claimed": false, 00:37:47.429 "zoned": false, 00:37:47.429 "supported_io_types": { 00:37:47.429 "read": true, 00:37:47.429 "write": true, 00:37:47.429 "unmap": true, 00:37:47.429 "flush": false, 00:37:47.429 "reset": true, 00:37:47.429 "nvme_admin": false, 00:37:47.429 "nvme_io": false, 00:37:47.429 "nvme_io_md": false, 00:37:47.429 "write_zeroes": true, 00:37:47.429 "zcopy": false, 00:37:47.429 "get_zone_info": false, 00:37:47.429 "zone_management": false, 00:37:47.429 "zone_append": false, 00:37:47.429 "compare": false, 00:37:47.429 "compare_and_write": false, 00:37:47.429 "abort": false, 00:37:47.429 "seek_hole": true, 00:37:47.429 "seek_data": true, 00:37:47.429 "copy": false, 00:37:47.429 "nvme_iov_md": false 00:37:47.429 }, 00:37:47.429 "driver_specific": { 00:37:47.429 "lvol": { 00:37:47.429 "lvol_store_uuid": "f9e56309-5ad9-465c-b92c-b37e65ceb142", 00:37:47.429 "base_bdev": "aio_bdev", 00:37:47.429 "thin_provision": false, 00:37:47.429 "num_allocated_clusters": 38, 00:37:47.429 "snapshot": false, 00:37:47.429 "clone": false, 00:37:47.429 "esnap_clone": false 00:37:47.429 } 00:37:47.429 } 00:37:47.429 } 00:37:47.429 ] 00:37:47.429 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:47.429 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:47.429 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:47.429 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:47.429 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:47.429 14:35:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:47.689 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:47.689 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0b571c1f-750b-48c0-ac6b-c897de3f26af 00:37:47.950 14:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9e56309-5ad9-465c-b92c-b37e65ceb142 00:37:47.950 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:48.210 00:37:48.210 real 0m17.698s 00:37:48.210 user 0m35.414s 00:37:48.210 sys 0m3.263s 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:48.210 ************************************ 00:37:48.210 END TEST lvs_grow_dirty 00:37:48.210 ************************************ 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:37:48.210 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:48.210 nvmf_trace.0 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
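END TEST lvs_grow_dirty above closes the dirty-recovery pass: the AIO base bdev is deleted out from under a live lvstore, the next bdev_lvol_get_lvstores is expected to fail with -19, and re-creating the AIO bdev makes the blobstore replay its metadata ("Performing recovery on blobstore" in the notices). A condensed sketch of that sequence with the same rpc.py calls the test used (UUID and paths copied from this run):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    LVS=f9e56309-5ad9-465c-b92c-b37e65ceb142

    $RPC bdev_aio_delete aio_bdev                  # hot-remove the base bdev under a live lvstore
    $RPC bdev_lvol_get_lvstores -u $LVS && exit 1  # must now fail: -19 "No such device"
    $RPC bdev_aio_create $AIO aio_bdev 4096        # re-attach; blobstore recovery replays metadata
    $RPC bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'   # 61 again, as asserted above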
00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:48.472 rmmod nvme_tcp 00:37:48.472 rmmod nvme_fabrics 00:37:48.472 rmmod nvme_keyring 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3671594 ']' 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3671594 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3671594 ']' 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3671594 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3671594 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3671594' 00:37:48.472 killing process with pid 3671594 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3671594 00:37:48.472 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3671594 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.733 14:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.649 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:50.649 00:37:50.649 real 0m45.243s 00:37:50.649 user 0m54.047s 00:37:50.649 sys 0m11.008s 00:37:50.649 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:50.649 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:50.649 ************************************ 00:37:50.649 END TEST nvmf_lvs_grow 00:37:50.649 ************************************ 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:50.910 ************************************ 00:37:50.910 START TEST nvmf_bdev_io_wait 00:37:50.910 ************************************ 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:50.910 * Looking for test storage... 
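Each suite in this run is dispatched the same way: run_test names the test, hands the script its transport flags, and brackets the output with the START/END banners seen here. A sketch of invoking this suite by hand, with the same script and flags the harness passed (assumes root and a rig with matching NICs):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode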
00:37:50.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:37:50.910 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:37:51.173 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:37:51.173 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:51.173 14:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.173 --rc genhtml_branch_coverage=1 00:37:51.173 --rc genhtml_function_coverage=1 00:37:51.173 --rc genhtml_legend=1 00:37:51.173 --rc geninfo_all_blocks=1 00:37:51.173 --rc geninfo_unexecuted_blocks=1 00:37:51.173 00:37:51.173 ' 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.173 --rc genhtml_branch_coverage=1 00:37:51.173 --rc genhtml_function_coverage=1 00:37:51.173 --rc genhtml_legend=1 00:37:51.173 --rc geninfo_all_blocks=1 00:37:51.173 --rc geninfo_unexecuted_blocks=1 00:37:51.173 00:37:51.173 ' 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.173 --rc genhtml_branch_coverage=1 00:37:51.173 --rc genhtml_function_coverage=1 00:37:51.173 --rc genhtml_legend=1 00:37:51.173 --rc geninfo_all_blocks=1 00:37:51.173 --rc geninfo_unexecuted_blocks=1 00:37:51.173 00:37:51.173 ' 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:51.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:51.173 --rc genhtml_branch_coverage=1 00:37:51.173 --rc genhtml_function_coverage=1 00:37:51.173 --rc genhtml_legend=1 00:37:51.173 --rc geninfo_all_blocks=1 00:37:51.173 --rc 
geninfo_unexecuted_blocks=1 00:37:51.173 00:37:51.173 ' 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:51.173 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:37:51.174 14:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:59.319 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
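The arrays built above are how nvmf/common.sh recognizes supported NICs: E810 parts by Intel device IDs 0x1592 and 0x159b, X722 by 0x37d2, plus a list of Mellanox IDs. As a rough cross-check outside the harness, lspci can filter by the same vendor:device pairs (the harness itself walks a pci_bus_cache, so this is only an approximation):

    lspci -d 8086:159b    # the two E810 ports reported just below (0000:4b:00.0 and .1)
    lspci -d 8086:1592    # the other E810 ID the harness also accepts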
00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:59.320 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:59.320 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:59.320 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:59.320 
14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:59.320 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:59.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:59.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:37:59.320 00:37:59.320 --- 10.0.0.2 ping statistics --- 00:37:59.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.320 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:59.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:59.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:37:59.320 00:37:59.320 --- 10.0.0.1 ping statistics --- 00:37:59.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:59.320 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3676692 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3676692 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3676692 ']' 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:59.320 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:59.321 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:59.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
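At this point the harness has built an isolated TCP path: the target-side port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace at 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1, an iptables ACCEPT rule opens port 4420, and both directions are verified with a ping before nvmf_tgt is launched inside the namespace with --wait-for-rpc. A compressed sketch of that wiring, using the commands shown in the log (interface names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator reaches the target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back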
00:37:59.321 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:59.321 14:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.321 [2024-11-25 14:36:03.653361] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:59.321 [2024-11-25 14:36:03.654468] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:59.321 [2024-11-25 14:36:03.654517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:59.321 [2024-11-25 14:36:03.753622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:59.321 [2024-11-25 14:36:03.808155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:59.321 [2024-11-25 14:36:03.808216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:59.321 [2024-11-25 14:36:03.808224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:59.321 [2024-11-25 14:36:03.808231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:59.321 [2024-11-25 14:36:03.808238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:59.321 [2024-11-25 14:36:03.810220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:59.321 [2024-11-25 14:36:03.810327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:59.321 [2024-11-25 14:36:03.810459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:59.321 [2024-11-25 14:36:03.810461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:59.321 [2024-11-25 14:36:03.811024] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 [2024-11-25 14:36:04.579174] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:59.583 [2024-11-25 14:36:04.580055] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:59.583 [2024-11-25 14:36:04.580124] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:59.583 [2024-11-25 14:36:04.580286] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
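The ordering here is the reason the target was started with --wait-for-rpc: bdev_set_options is a startup-time RPC and is rejected once the framework has initialized, so the test sets a deliberately tiny bdev_io pool (-p 5, -c 1) first and only then calls framework_start_init. The tiny pool is the point of this suite: it starves bdevperf of bdev_io structures so that I/O has to queue on the io-wait path. The two calls, as a sketch against the same RPC socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC bdev_set_options -p 5 -c 1   # shrink the bdev_io pool so I/O exercises the wait path
    $RPC framework_start_init         # subsystem init happens only after the options are set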
00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 [2024-11-25 14:36:04.591255] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 Malloc0 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:59.583 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:59.583 [2024-11-25 14:36:04.667881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:59.844 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:59.844 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3676921 00:37:59.844 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3676924 00:37:59.844 14:36:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:37:59.844 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:37:59.844 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:59.844 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:59.845 { 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme$subsystem", 00:37:59.845 "trtype": "$TEST_TRANSPORT", 00:37:59.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "$NVMF_PORT", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.845 "hdgst": ${hdgst:-false}, 00:37:59.845 "ddgst": ${ddgst:-false} 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 } 00:37:59.845 EOF 00:37:59.845 )") 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3676926 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:59.845 { 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme$subsystem", 00:37:59.845 "trtype": "$TEST_TRANSPORT", 00:37:59.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "$NVMF_PORT", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.845 "hdgst": ${hdgst:-false}, 00:37:59.845 "ddgst": ${ddgst:-false} 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 } 00:37:59.845 EOF 00:37:59.845 )") 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3676930 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:59.845 { 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme$subsystem", 00:37:59.845 "trtype": "$TEST_TRANSPORT", 00:37:59.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "$NVMF_PORT", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.845 "hdgst": ${hdgst:-false}, 00:37:59.845 "ddgst": ${ddgst:-false} 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 } 00:37:59.845 EOF 00:37:59.845 )") 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:59.845 { 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme$subsystem", 00:37:59.845 "trtype": "$TEST_TRANSPORT", 00:37:59.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "$NVMF_PORT", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:59.845 "hdgst": ${hdgst:-false}, 00:37:59.845 "ddgst": ${ddgst:-false} 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 } 00:37:59.845 EOF 00:37:59.845 )") 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3676921 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme1", 00:37:59.845 "trtype": "tcp", 00:37:59.845 "traddr": "10.0.0.2", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "4420", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:59.845 "hdgst": false, 00:37:59.845 "ddgst": false 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 }' 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme1", 00:37:59.845 "trtype": "tcp", 00:37:59.845 "traddr": "10.0.0.2", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "4420", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:59.845 "hdgst": false, 00:37:59.845 "ddgst": false 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 }' 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme1", 00:37:59.845 "trtype": "tcp", 00:37:59.845 "traddr": "10.0.0.2", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "4420", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:59.845 "hdgst": false, 00:37:59.845 "ddgst": false 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 }' 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:37:59.845 14:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:59.845 "params": { 00:37:59.845 "name": "Nvme1", 00:37:59.845 "trtype": "tcp", 00:37:59.845 "traddr": "10.0.0.2", 00:37:59.845 "adrfam": "ipv4", 00:37:59.845 "trsvcid": "4420", 00:37:59.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:59.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:59.845 "hdgst": false, 00:37:59.845 "ddgst": false 00:37:59.845 }, 00:37:59.845 "method": "bdev_nvme_attach_controller" 00:37:59.845 }' 00:37:59.845 [2024-11-25 14:36:04.726873] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:59.845 [2024-11-25 14:36:04.726948] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:37:59.845 [2024-11-25 14:36:04.727705] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:37:59.845 [2024-11-25 14:36:04.727778] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:37:59.845 [2024-11-25 14:36:04.729571] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:59.846 [2024-11-25 14:36:04.729644] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:37:59.846 [2024-11-25 14:36:04.729636] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:37:59.846 [2024-11-25 14:36:04.729695] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:38:00.107 [2024-11-25 14:36:04.953530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.107 [2024-11-25 14:36:04.993016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:38:00.107 [2024-11-25 14:36:05.018271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.107 [2024-11-25 14:36:05.056476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:38:00.107 [2024-11-25 14:36:05.113679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.107 [2024-11-25 14:36:05.155510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:38:00.107 [2024-11-25 14:36:05.180094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:00.367 [2024-11-25 14:36:05.219833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:38:00.368 Running I/O for 1 seconds... 00:38:00.368 Running I/O for 1 seconds... 00:38:00.368 Running I/O for 1 seconds... 00:38:00.368 Running I/O for 1 seconds... 
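(Note: the four bdevperf jobs above, one per workload on cores 0x10/0x20/0x40/0x80, were each fed a generated JSON config over fd 63 via process substitution. Below is the write job reconstructed as a standalone command: the bdev_nvme_attach_controller params are copied from the config the log prints, while the outer "subsystems" wrapper is an assumption about what the harness's gen_nvmf_target_json helper emits.)

./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
  --json <(cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
)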
00:38:01.311 7046.00 IOPS, 27.52 MiB/s [2024-11-25T13:36:06.401Z] 11454.00 IOPS, 44.74 MiB/s [2024-11-25T13:36:06.401Z] 182008.00 IOPS, 710.97 MiB/s 00:38:01.311 Latency(us) 00:38:01.311 [2024-11-25T13:36:06.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.311 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:38:01.311 Nvme1n1 : 1.00 181646.89 709.56 0.00 0.00 700.49 302.08 1966.08 00:38:01.311 [2024-11-25T13:36:06.401Z] =================================================================================================================== 00:38:01.311 [2024-11-25T13:36:06.401Z] Total : 181646.89 709.56 0.00 0.00 700.49 302.08 1966.08 00:38:01.311 00:38:01.311 Latency(us) 00:38:01.311 [2024-11-25T13:36:06.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.311 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:38:01.311 Nvme1n1 : 1.02 7088.67 27.69 0.00 0.00 17947.89 2170.88 26432.85 00:38:01.311 [2024-11-25T13:36:06.401Z] =================================================================================================================== 00:38:01.311 [2024-11-25T13:36:06.401Z] Total : 7088.67 27.69 0.00 0.00 17947.89 2170.88 26432.85 00:38:01.311 00:38:01.311 Latency(us) 00:38:01.311 [2024-11-25T13:36:06.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.311 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:38:01.311 Nvme1n1 : 1.01 11496.42 44.91 0.00 0.00 11088.97 6307.84 16930.13 00:38:01.311 [2024-11-25T13:36:06.401Z] =================================================================================================================== 00:38:01.311 [2024-11-25T13:36:06.401Z] Total : 11496.42 44.91 0.00 0.00 11088.97 6307.84 16930.13 00:38:01.311 7218.00 IOPS, 28.20 MiB/s 00:38:01.311 Latency(us) 00:38:01.311 [2024-11-25T13:36:06.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:01.311 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:38:01.311 Nvme1n1 : 1.01 7348.31 28.70 0.00 0.00 17375.63 3686.40 35170.99 00:38:01.311 [2024-11-25T13:36:06.401Z] =================================================================================================================== 00:38:01.311 [2024-11-25T13:36:06.401Z] Total : 7348.31 28.70 0.00 0.00 17375.63 3686.40 35170.99 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3676924 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3676926 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3676930 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:38:01.573 14:36:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:01.573 rmmod nvme_tcp 00:38:01.573 rmmod nvme_fabrics 00:38:01.573 rmmod nvme_keyring 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3676692 ']' 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3676692 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3676692 ']' 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3676692 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:01.573 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3676692 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3676692' 00:38:01.835 killing process with pid 3676692 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3676692 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3676692 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:38:01.835 14:36:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:01.835 14:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:04.383 00:38:04.383 real 0m13.107s 00:38:04.383 user 0m15.731s 00:38:04.383 sys 0m7.688s 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:38:04.383 ************************************ 00:38:04.383 END TEST nvmf_bdev_io_wait 00:38:04.383 ************************************ 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:04.383 ************************************ 00:38:04.383 START TEST nvmf_queue_depth 00:38:04.383 ************************************ 00:38:04.383 14:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:38:04.383 * Looking for test storage... 
00:38:04.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:04.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.383 --rc genhtml_branch_coverage=1 00:38:04.383 --rc genhtml_function_coverage=1 00:38:04.383 --rc genhtml_legend=1 00:38:04.383 --rc geninfo_all_blocks=1 00:38:04.383 --rc geninfo_unexecuted_blocks=1 00:38:04.383 00:38:04.383 ' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:04.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.383 --rc genhtml_branch_coverage=1 00:38:04.383 --rc genhtml_function_coverage=1 00:38:04.383 --rc genhtml_legend=1 00:38:04.383 --rc geninfo_all_blocks=1 00:38:04.383 --rc geninfo_unexecuted_blocks=1 00:38:04.383 00:38:04.383 ' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:04.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.383 --rc genhtml_branch_coverage=1 00:38:04.383 --rc genhtml_function_coverage=1 00:38:04.383 --rc genhtml_legend=1 00:38:04.383 --rc geninfo_all_blocks=1 00:38:04.383 --rc geninfo_unexecuted_blocks=1 00:38:04.383 00:38:04.383 ' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:04.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:04.383 --rc genhtml_branch_coverage=1 00:38:04.383 --rc genhtml_function_coverage=1 00:38:04.383 --rc genhtml_legend=1 00:38:04.383 --rc geninfo_all_blocks=1 00:38:04.383 --rc 
geninfo_unexecuted_blocks=1 00:38:04.383 00:38:04.383 ' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:38:04.383 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:38:04.384 14:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:12.524 14:36:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:12.524 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:12.524 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:38:12.524 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:12.524 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:12.524 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:12.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:12.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:38:12.525 00:38:12.525 --- 10.0.0.2 ping statistics --- 00:38:12.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.525 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:12.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:12.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:38:12.525 00:38:12.525 --- 10.0.0.1 ping statistics --- 00:38:12.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.525 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3681872 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3681872 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3681872 ']' 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.525 14:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 [2024-11-25 14:36:16.871092] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:12.525 [2024-11-25 14:36:16.872256] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:38:12.525 [2024-11-25 14:36:16.872307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.525 [2024-11-25 14:36:16.948837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.525 [2024-11-25 14:36:16.994814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.525 [2024-11-25 14:36:16.994865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.525 [2024-11-25 14:36:16.994874] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:12.525 [2024-11-25 14:36:16.994881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:12.525 [2024-11-25 14:36:16.994887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:12.525 [2024-11-25 14:36:16.995601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.525 [2024-11-25 14:36:17.066156] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:12.525 [2024-11-25 14:36:17.066436] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 [2024-11-25 14:36:17.152491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 Malloc0 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
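
queue_depth.sh lines 23-27 provision the target entirely over JSON-RPC. rpc_cmd is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the equivalent direct invocations are:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdev_malloc_create 64 512 backs the namespace with a 64 MiB RAM disk using 512-byte blocks, and -a on the subsystem allows any host NQN to connect, which keeps the test free of host whitelisting.
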
00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 [2024-11-25 14:36:17.240664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3681936 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3681936 /var/tmp/bdevperf.sock 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3681936 ']' 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:12.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.525 14:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:12.525 [2024-11-25 14:36:17.300332] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
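
The initiator side is bdevperf, a second SPDK app with its own RPC socket. -z makes it start idle and wait for RPC, so the script can attach the remote controller before kicking off I/O; -q 1024 is the queue depth this test exists to exercise (1024 I/Os kept in flight), -o 4096 the I/O size in bytes, -w verify a write-then-read-back-and-check workload, and -t 10 the duration in seconds. The flow, minus harness wrappers:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
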
00:38:12.525 [2024-11-25 14:36:17.300396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3681936 ]
00:38:12.525 [2024-11-25 14:36:17.393549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:12.525 [2024-11-25 14:36:17.446980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:13.098 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:13.098 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:38:13.098 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:38:13.098 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:13.098 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:38:13.359 NVMe0n1
00:38:13.359 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:13.359 14:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:38:13.359 Running I/O for 10 seconds...
00:38:15.683 8223.00 IOPS, 32.12 MiB/s
[2024-11-25T13:36:21.714Z] 8723.50 IOPS, 34.08 MiB/s
[2024-11-25T13:36:22.656Z] 9637.67 IOPS, 37.65 MiB/s
[2024-11-25T13:36:23.599Z] 10482.75 IOPS, 40.95 MiB/s
[2024-11-25T13:36:24.540Z] 11060.60 IOPS, 43.21 MiB/s
[2024-11-25T13:36:25.483Z] 11454.33 IOPS, 44.74 MiB/s
[2024-11-25T13:36:26.426Z] 11826.57 IOPS, 46.20 MiB/s
[2024-11-25T13:36:27.810Z] 12038.12 IOPS, 47.02 MiB/s
[2024-11-25T13:36:28.753Z] 12192.56 IOPS, 47.63 MiB/s
[2024-11-25T13:36:28.753Z] 12363.60 IOPS, 48.30 MiB/s
00:38:23.663 Latency(us)
00:38:23.663 [2024-11-25T13:36:28.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:23.663 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:38:23.663 Verification LBA range: start 0x0 length 0x4000
00:38:23.663 NVMe0n1 : 10.06 12388.23 48.39 0.00 0.00 82341.28 21626.88 75584.85
00:38:23.663 [2024-11-25T13:36:28.753Z] ===================================================================================================================
00:38:23.663 [2024-11-25T13:36:28.753Z] Total : 12388.23 48.39 0.00 0.00 82341.28 21626.88 75584.85
00:38:23.663 {
00:38:23.663 "results": [
00:38:23.663 {
00:38:23.663 "job": "NVMe0n1",
00:38:23.663 "core_mask": "0x1",
00:38:23.663 "workload": "verify",
00:38:23.663 "status": "finished",
00:38:23.663 "verify_range": {
00:38:23.663 "start": 0,
00:38:23.663 "length": 16384
00:38:23.663 },
00:38:23.663 "queue_depth": 1024,
00:38:23.663 "io_size": 4096,
00:38:23.663 "runtime": 10.061651,
00:38:23.663 "iops": 12388.225351883106,
00:38:23.663 "mibps": 48.39150528079338,
00:38:23.663 "io_failed": 0,
00:38:23.663 "io_timeout": 0,
00:38:23.663 "avg_latency_us": 82341.28451754033,
00:38:23.663 "min_latency_us": 21626.88,
00:38:23.663 "max_latency_us": 75584.85333333333
00:38:23.663 }
00:38:23.663 ],
00:38:23.663 "core_count": 1 00:38:23.663 } 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3681936 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3681936 ']' 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3681936 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3681936 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3681936' 00:38:23.663 killing process with pid 3681936 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3681936 00:38:23.663 Received shutdown signal, test time was about 10.000000 seconds 00:38:23.663 00:38:23.663 Latency(us) 00:38:23.663 [2024-11-25T13:36:28.753Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.663 [2024-11-25T13:36:28.753Z] =================================================================================================================== 00:38:23.663 [2024-11-25T13:36:28.753Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3681936 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:23.663 rmmod nvme_tcp 00:38:23.663 rmmod nvme_fabrics 00:38:23.663 rmmod nvme_keyring 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:38:23.663 14:36:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3681872 ']' 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3681872 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3681872 ']' 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3681872 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:23.663 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3681872 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3681872' 00:38:23.924 killing process with pid 3681872 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3681872 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3681872 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:23.924 14:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:26.469 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:26.469 00:38:26.469 real 0m21.978s 00:38:26.469 user 0m24.557s 00:38:26.469 sys 0m7.496s 00:38:26.469 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
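
Teardown mirrors setup: kill both SPDK apps, unload the kernel initiator modules, strip the SPDK-tagged iptables rules, and drop the namespace. The iptables trick is worth noting: every rule the harness adds carries an '-m comment' tag starting with SPDK_NVMF, so cleanup is just a save/filter/restore. Condensed; the namespace delete is what _remove_spdk_ns presumably boils down to, since that helper's body is not shown in this trace:

  kill $bdevperf_pid && wait $bdevperf_pid
  kill $nvmfpid && wait $nvmfpid
  modprobe -r nvme-tcp nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed _remove_spdk_ns equivalent
  ip -4 addr flush cvl_0_1
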
common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.469 14:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:26.469 ************************************ 00:38:26.469 END TEST nvmf_queue_depth 00:38:26.469 ************************************ 00:38:26.469 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:26.469 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:26.469 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.469 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:26.469 ************************************ 00:38:26.469 START TEST nvmf_target_multipath 00:38:26.469 ************************************ 00:38:26.469 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:26.469 * Looking for test storage... 00:38:26.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:26.469 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:26.469 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:26.470 14:36:31 
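
Each test script is driven through run_test, which prints the START/END banners and the real/user/sys timing visible above (nvmf_queue_depth took 21.978 s of wall clock). Reduced to its visible effects it behaves like the sketch below; the real helper in autotest_common.sh also threads through xtrace control and timing bookkeeping:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"     # e.g. multipath.sh --transport=tcp --interrupt-mode
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
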
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.470 --rc genhtml_branch_coverage=1 00:38:26.470 --rc genhtml_function_coverage=1 00:38:26.470 --rc genhtml_legend=1 00:38:26.470 --rc geninfo_all_blocks=1 00:38:26.470 --rc geninfo_unexecuted_blocks=1 00:38:26.470 00:38:26.470 ' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.470 --rc genhtml_branch_coverage=1 00:38:26.470 --rc genhtml_function_coverage=1 00:38:26.470 --rc genhtml_legend=1 00:38:26.470 --rc geninfo_all_blocks=1 00:38:26.470 --rc geninfo_unexecuted_blocks=1 00:38:26.470 00:38:26.470 ' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.470 --rc genhtml_branch_coverage=1 00:38:26.470 --rc genhtml_function_coverage=1 00:38:26.470 --rc genhtml_legend=1 00:38:26.470 --rc geninfo_all_blocks=1 00:38:26.470 --rc 
geninfo_unexecuted_blocks=1 00:38:26.470 00:38:26.470 ' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:26.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:26.470 --rc genhtml_branch_coverage=1 00:38:26.470 --rc genhtml_function_coverage=1 00:38:26.470 --rc genhtml_legend=1 00:38:26.470 --rc geninfo_all_blocks=1 00:38:26.470 --rc geninfo_unexecuted_blocks=1 00:38:26.470 00:38:26.470 ' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
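
Sourcing nvmf/common.sh also derives a host identity for later nvme-cli connections: nvme gen-hostnqn emits a UUID-based NQN, and the UUID suffix doubles as the host ID stored in NVME_HOSTID above. In use, the pairing looks like the sketch below; the connect line is illustrative only, since this particular multipath run bails out before connecting (see the single-NIC guard further down):

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare uuid, one way to peel it off the NQN
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
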
00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:26.470 14:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:26.470 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:38:26.471 14:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
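
build_nvmf_app_args (nvmf/common.sh@25-39 above) assembles the target command line incrementally; the branch at @33-34 is where --interrupt-mode gets appended for this test group. In outline; the interrupt-mode condition appears only as a pre-evaluated '[' 1 -eq 1 ']' in the trace, so the variable name below is a placeholder:

  NVMF_APP=("$SPDK_BIN_DIR/nvmf_tgt")              # base binary (assumed default)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shm id and tracepoint group mask
  NVMF_APP+=("${NO_HUGE[@]}")
  if [ "$interrupt_mode_enabled" -eq 1 ]; then     # placeholder name for the @33 test
      NVMF_APP+=(--interrupt-mode)
  fi
  # once the namespace exists, the whole command gets prefixed (nvmf/common.sh@293):
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
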
00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:34.617 14:36:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:34.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:34.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:34.617 14:36:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:34.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:34.617 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:34.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
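
NIC discovery works off sysfs: common.sh keeps a whitelist of PCI IDs (the e810/x722/mlx arrays built at @320-344 above), matches them against the bus, then lists the netdevs each matched function exposes. Here both functions of the E810 (0x8086:0x159b at 0000:4b:00.0 and 0000:4b:00.1) resolve to cvl_0_0 and cvl_0_1. The per-device loop reduces to:

  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdev dirs for this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done
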
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:34.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:34.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.554 ms 00:38:34.618 00:38:34.618 --- 10.0.0.2 ping statistics --- 00:38:34.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.618 rtt min/avg/max/mdev = 0.554/0.554/0.554/0.000 ms 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:34.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:34.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:38:34.618 00:38:34.618 --- 10.0.0.1 ping statistics --- 00:38:34.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:34.618 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:34.618 only one NIC for nvmf test 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:34.618 rmmod nvme_tcp 00:38:34.618 rmmod nvme_fabrics 00:38:34.618 rmmod nvme_keyring 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:34.618 14:36:38 
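
The multipath test needs two target/initiator address pairs, and this rig wired up only one: NVMF_SECOND_TARGET_IP and NVMF_SECOND_INITIATOR_IP are both set empty at nvmf/common.sh@262-263 above. So multipath.sh bails out cleanly before creating any subsystem. The guard at lines 45-48 is effectively the following, where the empty '-z' operand in the @45 trace is presumably one of those second-path variables:

  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo 'only one NIC for nvmf test'
      nvmftestfini
      exit 0
  fi

Exiting 0 keeps the overall job green: the test is skipped, not failed.
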
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:34.618 14:36:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:38:36.005 14:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.005 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.006 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:36.006 00:38:36.006 real 0m9.940s 00:38:36.006 user 0m2.147s 00:38:36.006 sys 0m5.743s 00:38:36.006 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.006 14:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:36.006 ************************************ 00:38:36.006 END TEST nvmf_target_multipath 00:38:36.006 ************************************ 00:38:36.006 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:36.006 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:36.006 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.006 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:36.006 ************************************ 00:38:36.006 START TEST nvmf_zcopy 00:38:36.006 ************************************ 00:38:36.006 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:36.267 * Looking for test storage... 
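The multipath suite exits cleanly here (it bailed out early because only one NIC pair is cabled for nvmf), and run_test immediately launches the zcopy suite. For readers reproducing this step by hand, a minimal sketch; invoking the script outside the Jenkins harness is an assumption, and run_test itself only adds the xtrace bookkeeping and the START/END TEST banners:

  # Launch the zcopy target test directly, with the same flags as the log.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path taken from the log
  test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode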
00:38:36.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:36.267 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.268 --rc genhtml_branch_coverage=1 00:38:36.268 --rc genhtml_function_coverage=1 00:38:36.268 --rc genhtml_legend=1 00:38:36.268 --rc geninfo_all_blocks=1 00:38:36.268 --rc geninfo_unexecuted_blocks=1 00:38:36.268 00:38:36.268 ' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.268 --rc genhtml_branch_coverage=1 00:38:36.268 --rc genhtml_function_coverage=1 00:38:36.268 --rc genhtml_legend=1 00:38:36.268 --rc geninfo_all_blocks=1 00:38:36.268 --rc geninfo_unexecuted_blocks=1 00:38:36.268 00:38:36.268 ' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.268 --rc genhtml_branch_coverage=1 00:38:36.268 --rc genhtml_function_coverage=1 00:38:36.268 --rc genhtml_legend=1 00:38:36.268 --rc geninfo_all_blocks=1 00:38:36.268 --rc geninfo_unexecuted_blocks=1 00:38:36.268 00:38:36.268 ' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:36.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:36.268 --rc genhtml_branch_coverage=1 00:38:36.268 --rc genhtml_function_coverage=1 00:38:36.268 --rc genhtml_legend=1 00:38:36.268 --rc geninfo_all_blocks=1 00:38:36.268 --rc geninfo_unexecuted_blocks=1 00:38:36.268 00:38:36.268 ' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:36.268 14:36:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:36.268 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:38:36.269 14:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:38:44.433 14:36:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:44.433 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:44.433 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:44.433 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:44.433 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:44.433 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:44.434 14:36:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:44.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:44.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:38:44.434 00:38:44.434 --- 10.0.0.2 ping statistics --- 00:38:44.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.434 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:44.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:44.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:38:44.434 00:38:44.434 --- 10.0.0.1 ping statistics --- 00:38:44.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:44.434 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3692446 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3692446 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3692446 ']' 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:44.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:44.434 14:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.434 [2024-11-25 14:36:48.864812] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:44.434 [2024-11-25 14:36:48.865944] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:38:44.434 [2024-11-25 14:36:48.865998] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:44.434 [2024-11-25 14:36:48.949040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.434 [2024-11-25 14:36:48.999866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:44.434 [2024-11-25 14:36:48.999914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:44.434 [2024-11-25 14:36:48.999922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:44.434 [2024-11-25 14:36:48.999930] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:44.434 [2024-11-25 14:36:48.999936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:44.434 [2024-11-25 14:36:49.000674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:44.434 [2024-11-25 14:36:49.076583] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:44.434 [2024-11-25 14:36:49.076875] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
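For reference, the network plumbing that nvmftestinit performed above, plus the target launch, condense to the commands below; every line appears in the trace, only the consolidation into one runnable block is editorial:

  # The target NIC (cvl_0_0) moves into a private namespace; the initiator NIC
  # (cvl_0_1) stays in the root namespace: 10.0.0.2 = target, 10.0.0.1 = initiator.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator interface and verify reachability
  # in both directions (the two pings recorded above).
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # Start the target inside the namespace: -m 0x2 pins it to core 1,
  # -e 0xFFFF enables all tracepoint groups, and interrupt mode replaces polling.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2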
00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.735 [2024-11-25 14:36:49.729572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.735 [2024-11-25 14:36:49.757820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:44.735 14:36:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.735 malloc0 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:44.735 { 00:38:44.735 "params": { 00:38:44.735 "name": "Nvme$subsystem", 00:38:44.735 "trtype": "$TEST_TRANSPORT", 00:38:44.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:44.735 "adrfam": "ipv4", 00:38:44.735 "trsvcid": "$NVMF_PORT", 00:38:44.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:44.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:44.735 "hdgst": ${hdgst:-false}, 00:38:44.735 "ddgst": ${ddgst:-false} 00:38:44.735 }, 00:38:44.735 "method": "bdev_nvme_attach_controller" 00:38:44.735 } 00:38:44.735 EOF 00:38:44.735 )") 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:44.735 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:45.062 14:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:45.062 "params": { 00:38:45.062 "name": "Nvme1", 00:38:45.062 "trtype": "tcp", 00:38:45.062 "traddr": "10.0.0.2", 00:38:45.062 "adrfam": "ipv4", 00:38:45.062 "trsvcid": "4420", 00:38:45.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:45.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:45.062 "hdgst": false, 00:38:45.062 "ddgst": false 00:38:45.062 }, 00:38:45.062 "method": "bdev_nvme_attach_controller" 00:38:45.062 }' 00:38:45.062 [2024-11-25 14:36:49.862180] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
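The rpc_cmd calls traced above configure the freshly started target over its JSON-RPC socket. Assuming the stock scripts/rpc.py client (rpc_cmd is a thin test-helper wrapper around it, and direct calls would need the same network namespace as the target), the equivalent sequence is:

  # Zero-copy-enabled TCP transport with in-capsule data disabled (-c 0).
  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  # Subsystem allowing any host (-a), fixed serial, at most 10 namespaces (-m 10).
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  # Data and discovery listeners on the target address.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # 32 MiB malloc bdev with a 4 KiB block size, exported as namespace 1.
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1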
00:38:45.062 [2024-11-25 14:36:49.862248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3692575 ]
[2024-11-25 14:36:49.954734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-25 14:36:50.008755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:45.343 Running I/O for 10 seconds...
00:38:47.235 6384.00 IOPS, 49.88 MiB/s
[2024-11-25T13:36:53.270Z] 6441.50 IOPS, 50.32 MiB/s
[2024-11-25T13:36:54.656Z] 6452.67 IOPS, 50.41 MiB/s
[2024-11-25T13:36:55.600Z] 6466.00 IOPS, 50.52 MiB/s
[2024-11-25T13:36:56.542Z] 6474.80 IOPS, 50.58 MiB/s
[2024-11-25T13:36:57.484Z] 6985.83 IOPS, 54.58 MiB/s
[2024-11-25T13:36:58.427Z] 7371.29 IOPS, 57.59 MiB/s
[2024-11-25T13:36:59.381Z] 7657.12 IOPS, 59.82 MiB/s
[2024-11-25T13:37:00.326Z] 7879.56 IOPS, 61.56 MiB/s
[2024-11-25T13:37:00.326Z] 8058.10 IOPS, 62.95 MiB/s
00:38:55.236                                                        Latency(us)
00:38:55.236 [2024-11-25T13:37:00.326Z] Device Information          : runtime(s)     IOPS    MiB/s  Fail/s  TO/s    Average      min       max
00:38:55.236 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:38:55.236 Verification LBA range: start 0x0 length 0x1000
00:38:55.236 Nvme1n1                     :      10.01  8060.11    62.97    0.00    0.00   15834.51  1262.93  26978.99
00:38:55.236 [2024-11-25T13:37:00.326Z] ===================================================================================================================
00:38:55.236 [2024-11-25T13:37:00.326Z] Total                       :             8060.11    62.97    0.00    0.00   15834.51  1262.93  26978.99
00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3694579
00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:55.497 {
00:38:55.497   "params": {
00:38:55.497     "name": "Nvme$subsystem",
00:38:55.497     "trtype": "$TEST_TRANSPORT",
00:38:55.497     "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:55.497     "adrfam": "ipv4",
00:38:55.497     "trsvcid": "$NVMF_PORT",
00:38:55.497     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:55.497     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:55.497     "hdgst": ${hdgst:-false},
00:38:55.497     "ddgst": ${ddgst:-false}
00:38:55.497   },
00:38:55.497   "method": "bdev_nvme_attach_controller"
00:38:55.497 }
00:38:55.497 EOF
00:38:55.497 )")
00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:38:55.497
[2024-11-25 14:37:00.361084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.361116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:38:55.497 14:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:55.497 "params": { 00:38:55.497 "name": "Nvme1", 00:38:55.497 "trtype": "tcp", 00:38:55.497 "traddr": "10.0.0.2", 00:38:55.497 "adrfam": "ipv4", 00:38:55.497 "trsvcid": "4420", 00:38:55.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:55.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:55.497 "hdgst": false, 00:38:55.497 "ddgst": false 00:38:55.497 }, 00:38:55.497 "method": "bdev_nvme_attach_controller" 00:38:55.497 }' 00:38:55.497 [2024-11-25 14:37:00.373054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.373063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.385052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.385061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.397051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.397060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.402982] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
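Each bdevperf run receives its controller configuration as JSON over an anonymous pipe (/dev/fd/62 for the first run, /dev/fd/63 for the second), produced by gen_nvmf_target_json from the fragment printed above; the helper wraps that bdev_nvme_attach_controller element into a full "subsystems" config document before bdevperf reads it. A sketch of replaying both workloads from a regular file instead of a pipe (the /tmp path is illustrative, not from the log):

  # /tmp/bdevperf_nvme.json holds the expanded config shown in the trace
  # (Nvme1 attached to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1).
  # First a 10 s verify pass, then a 5 s 50/50 random read/write run,
  # both at queue depth 128 with 8 KiB I/O:
  ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 10 -q 128 -w verify -o 8192
  ./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192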
00:38:55.497 [2024-11-25 14:37:00.403031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3694579 ] 00:38:55.497 [2024-11-25 14:37:00.409051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.409059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.421051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.421059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.433051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.433059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.445051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.445058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.457051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.457058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.469051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.469058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.481051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.481059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.484687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.497 [2024-11-25 14:37:00.493053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.493062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.505052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.505061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.513919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.497 [2024-11-25 14:37:00.517051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.517060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.529059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.529070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.541056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.541068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.553053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:38:55.497 [2024-11-25 14:37:00.553063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.497 [2024-11-25 14:37:00.565052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.497 [2024-11-25 14:37:00.565061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.498 [2024-11-25 14:37:00.577187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.498 [2024-11-25 14:37:00.577199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.758 [2024-11-25 14:37:00.589058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.758 [2024-11-25 14:37:00.589073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.601053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.601063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.613053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.613063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.625053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.625063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.637059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.637074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 Running I/O for 5 seconds... 
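The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs around this point is expected noise, not a failure: while the 5-second randrw job runs, zcopy.sh keeps re-issuing nvmf_subsystem_add_ns against the live subsystem to exercise the hot-attach path under I/O, and each attempt fails because NSID 1 is still attached. A hedged sketch of that phase; the exact loop lives in test/nvmf/target/zcopy.sh and this reconstruction assumes the simplest form:

  # Hammer the namespace-add path while bdevperf (pid in $perfpid) is running;
  # every call is expected to fail with "NSID 1 already in use".
  while kill -0 "$perfpid" 2> /dev/null; do
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done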
00:38:55.759 [2024-11-25 14:37:00.652865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.652882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.666025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.666042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.680663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.680680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.693538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.693554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.708179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.708196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.721210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.721226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.734306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.734320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.748321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.748337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.761326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.761341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.776347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.776362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.789282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.789298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.801648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.801663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.815996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.816011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.829235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 [2024-11-25 14:37:00.829250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:55.759 [2024-11-25 14:37:00.842183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:55.759 
[2024-11-25 14:37:00.842198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.856388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.856404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.869591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.869605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.883999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.884015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.897251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.897266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.910195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.910210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.924848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.924863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.937786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.937800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.952787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.952802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.965771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.965786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.980372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.980387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:00.993600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:00.993616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.007855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.007870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.020922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.020938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.033468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.033483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.048440] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.048455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.061621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.061636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.076514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.076529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.090013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.090028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.020 [2024-11-25 14:37:01.104091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.020 [2024-11-25 14:37:01.104107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.117147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.117167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.130242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.130257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.144243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.144258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.157194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.157209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.169835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.169850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.184465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.184480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.197565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.197580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.212311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.212327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.225430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.225445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.240359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.240375] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.253427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.253442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.268327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.268343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.281256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.281278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.294060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.294075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.308792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.308807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.321757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.321771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.336453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.336468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.349587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.349602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.282 [2024-11-25 14:37:01.364466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.282 [2024-11-25 14:37:01.364481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.542 [2024-11-25 14:37:01.377633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.542 [2024-11-25 14:37:01.377649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.542 [2024-11-25 14:37:01.392254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.542 [2024-11-25 14:37:01.392269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.542 [2024-11-25 14:37:01.405364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.542 [2024-11-25 14:37:01.405378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.542 [2024-11-25 14:37:01.420212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.542 [2024-11-25 14:37:01.420228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.542 [2024-11-25 14:37:01.433201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.542 [2024-11-25 14:37:01.433217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.542 [2024-11-25 14:37:01.445984] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.542 [2024-11-25 14:37:01.445999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.542 [2024-11-25 14:37:01.460405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.460420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.473403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.473418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.488539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.488554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.501661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.501676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.516693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.516709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.530057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.530072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.544382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.544401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.557568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.557583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.571889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.571905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.584885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.584900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.597828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.597844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.612434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.612450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.543 [2024-11-25 14:37:01.625434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.543 [2024-11-25 14:37:01.625449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.803 [2024-11-25 14:37:01.640067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.803 [2024-11-25 14:37:01.640083] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.803 19017.00 IOPS, 148.57 MiB/s [2024-11-25T13:37:01.893Z] [2024-11-25 14:37:01.653055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.803 [2024-11-25 14:37:01.653070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.803 [2024-11-25 14:37:01.666771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.803 [2024-11-25 14:37:01.666787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.803 [2024-11-25 14:37:01.680169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.803 [2024-11-25 14:37:01.680185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.693116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.693132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.706080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.706095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.720592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.720607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.733695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.733710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.748225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.748241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.761400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.761414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.776075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.776090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.789114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.789129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.801751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.801770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.816067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.816082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.829109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.829124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 
14:37:01.841950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.841965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.856230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.856244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.869715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.869729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:56.804 [2024-11-25 14:37:01.884476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:56.804 [2024-11-25 14:37:01.884491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.897675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.897690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.912111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.912125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.925294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.925309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.938215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.938231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.952251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.952266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.965329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.965343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.980432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.980447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:01.993591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:01.993606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:02.008486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:02.008501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:02.021659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:02.021674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.064 [2024-11-25 14:37:02.036056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.064 [2024-11-25 14:37:02.036071] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.049168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.049183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.062016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.062031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.076266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.076282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.089476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.089491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.104278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.104294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.117496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.117511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.131979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.131994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.065 [2024-11-25 14:37:02.144954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.065 [2024-11-25 14:37:02.144969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.157811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.157827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.172345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.172360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.185531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.185546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.200093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.200107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.213268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.213283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.226134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.226149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.240627] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.240643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.253385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.253399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.268308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.268323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.281294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.281309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.294425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.294439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.308574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.308590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.321562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.321576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.336377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.336392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.349399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.349413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.364536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.364551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.377763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.377778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.392164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.392180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.325 [2024-11-25 14:37:02.404983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.325 [2024-11-25 14:37:02.404998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.418411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.418425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.432354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.432370] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.445370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.445384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.460409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.460424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.473640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.473655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.488344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.488359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.501682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.501696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.516340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.516355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.529451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.529465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.544247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.544263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.557456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.557471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.572008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.572024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.585064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.585079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.598006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.598022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.611987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.612003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.625045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.625060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.638050] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.638065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 [2024-11-25 14:37:02.652471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.652487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.586 19051.00 IOPS, 148.84 MiB/s [2024-11-25T13:37:02.676Z] [2024-11-25 14:37:02.665269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.586 [2024-11-25 14:37:02.665284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.677937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.677952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.692357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.692373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.705327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.705342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.718117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.718131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.732381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.732397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.745466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.745480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.760050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.760066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.773276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.773291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.785995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.786010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.800019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.800035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.813102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.813117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.825874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:38:57.848 [2024-11-25 14:37:02.825893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.840166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.840181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.853228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.853244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.866012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.866027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.880571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.880587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.893380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.893395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.908227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.908243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.921364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.921379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:57.848 [2024-11-25 14:37:02.936089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:57.848 [2024-11-25 14:37:02.936104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:02.948970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:02.948986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:02.962182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:02.962197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:02.976276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:02.976291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:02.989388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:02.989403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.004283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.004299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.017422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.017437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.032221] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.032237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.045130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.045146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.058083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.058098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.071951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.071966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.085015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.085034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.097908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.097923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.112592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.112608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.125682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.125697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.140250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.140265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.153211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.153227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.165979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.165994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.180271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.180286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.109 [2024-11-25 14:37:03.193271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.109 [2024-11-25 14:37:03.193286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.206571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.206587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.220507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.220523] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.233365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.233380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.248319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.248335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.261467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.261482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.276371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.276387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.289240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.289255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.301894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.301909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.316170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.316186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.329476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.329491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.342219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.342238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.356566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.356581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.369403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.369418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.384130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.384146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.397111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.397127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.409836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.409852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.424228] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.424243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.437311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.437327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.370 [2024-11-25 14:37:03.449860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.370 [2024-11-25 14:37:03.449875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.464186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.464202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.477368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.477384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.492226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.492241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.505106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.505121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.517697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.517711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.532077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.532092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.545166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.545180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.557939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.557955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.572280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.572295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.585253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.585268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.598054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.598073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.612506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.612520] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.625617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.625632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.640177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.640192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.653042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.653058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 19079.00 IOPS, 149.05 MiB/s [2024-11-25T13:37:03.721Z] [2024-11-25 14:37:03.665933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.665948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.680883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.680898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.694179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.694194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.631 [2024-11-25 14:37:03.708267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.631 [2024-11-25 14:37:03.708282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.721319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.721335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.733987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.734002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.748045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.748060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.760837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.760852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.774181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.774196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.788144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.788165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.801277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.801293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 
14:37:03.813798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.813813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.828271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.828286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.841303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.841318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.853976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.853991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.868321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.868336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.881137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.881153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.894610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.894625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.908707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.908722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.921493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.921508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.936371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.936386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.892 [2024-11-25 14:37:03.949357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.892 [2024-11-25 14:37:03.949372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.893 [2024-11-25 14:37:03.964208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.893 [2024-11-25 14:37:03.964223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:58.893 [2024-11-25 14:37:03.976987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:58.893 [2024-11-25 14:37:03.977002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:03.989581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:03.989596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.004639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.004654] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.017611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.017626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.032268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.032283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.045511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.045525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.060101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.060116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.073038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.073053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.086171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.086186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.100753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.100768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.113592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.113606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.128462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.128477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.141506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.141521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.156150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.156168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.168819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.168833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.181451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.181466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.196241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.196256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.209083] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.209099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.222271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.222285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.153 [2024-11-25 14:37:04.236201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.153 [2024-11-25 14:37:04.236216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.249449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.249464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.264496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.264512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.277563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.277577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.292530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.292545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.305383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.305398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.320186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.320200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.333273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.333288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.345971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.345985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.360339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.360358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.373508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.373522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.388104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.388119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.401185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.401200] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.414912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.414926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.428522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.428537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.441611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.441626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.456249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.456264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.469304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.469318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.482064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.482079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.415 [2024-11-25 14:37:04.496792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.415 [2024-11-25 14:37:04.496807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.509843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.509859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.524775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.524790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.537629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.537644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.552343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.552358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.564970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.564987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.577793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.577809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.592349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.592366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.605691] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.605706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.676 [2024-11-25 14:37:04.620585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.676 [2024-11-25 14:37:04.620605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.633197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.633212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.646178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.646194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 19097.25 IOPS, 149.20 MiB/s [2024-11-25T13:37:04.767Z] [2024-11-25 14:37:04.660125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.660141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.673065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.673080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.686447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.686462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.700222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.700237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.712786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.712802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.725746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.725760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.740075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.740091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.677 [2024-11-25 14:37:04.752574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.677 [2024-11-25 14:37:04.752589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.938 [2024-11-25 14:37:04.766043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.938 [2024-11-25 14:37:04.766059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.938 [2024-11-25 14:37:04.780909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.938 [2024-11-25 14:37:04.780926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.938 [2024-11-25 14:37:04.793842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
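Note on the output above: the paired errors repeating here are the expected result of this zcopy subtest, which keeps issuing the nvmf_subsystem_add_ns RPC for NSID 1 while that NSID is still attached to nqn.2016-06.io.spdk:cnode1; spdk_nvmf_subsystem_add_ns_ext rejects each duplicate and the RPC handler (nvmf_rpc_ns_paused) logs the failed add. The interleaved I/O progress line converts IOPS to throughput using the job's 8192-byte I/O size: 19097.25 * 8192 / 2^20 ≈ 149.20 MiB/s, matching the printed figure. A minimal sketch of reproducing the rejection by hand against a running target with SPDK's rpc.py — the bdev names Malloc0 and Malloc1 are illustrative placeholders, not taken from this run:

    # Sketch only: a second add with an already-claimed NSID must fail.
    # Assumes a running target with subsystem nqn.2016-06.io.spdk:cnode1 and two malloc bdevs.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # succeeds, claims NSID 1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected by the target
    # expected target log: subsystem.c: *ERROR*: Requested NSID 1 already in use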
00:38:59.938 [2024-11-25 14:37:04.793857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.938 [2024-11-25 14:37:04.808513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.938 [2024-11-25 14:37:04.808530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.938 [2024-11-25 14:37:04.821490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.938 [2024-11-25 14:37:04.821506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.938 [2024-11-25 14:37:04.836399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.836414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.849284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.849300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.862271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.862287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.876519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.876539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.889709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.889724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.904239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.904255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.916966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.916982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.929818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.929833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.944393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.944410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.957132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.957147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.970144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.970163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.984299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.984315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:04.997338] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:04.997353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:05.010381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:05.010397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:59.939 [2024-11-25 14:37:05.024047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:59.939 [2024-11-25 14:37:05.024063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.037223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.037240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.050137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.050152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.064382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.064397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.077430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.077445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.092358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.092374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.105465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.105480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.120541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.120557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.133575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.133590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.148298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.148313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.161245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.161260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.174215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.174230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.188110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.188125] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.200867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.200882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.213761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.213776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.227998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.228013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.240901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.240917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.253240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.253255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.265484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.265498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.201 [2024-11-25 14:37:05.279751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.201 [2024-11-25 14:37:05.279767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.292385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.292400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.305104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.305119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.317687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.317701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.332068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.332082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.344967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.344982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.357596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.357611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.371966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.371981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.385235] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.385250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.398185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.398199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.411883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.411898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.424665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.424680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.437414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.437428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.452234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.452249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.465354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.465369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.479608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.479623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.492362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.492378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.504945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.504960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.517953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.517967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.531956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.531971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.462 [2024-11-25 14:37:05.545014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.462 [2024-11-25 14:37:05.545029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.723 [2024-11-25 14:37:05.557717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.723 [2024-11-25 14:37:05.557732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.723 [2024-11-25 14:37:05.572656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.723 [2024-11-25 14:37:05.572671] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.585777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.585792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.600234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.600249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.613208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.613223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.626164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.626180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.640223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.640238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.652764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.652780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 19128.20 IOPS, 149.44 MiB/s [2024-11-25T13:37:05.813Z] [2024-11-25 14:37:05.664994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.665010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723
00:39:00.723 Latency(us)
00:39:00.723 [2024-11-25T13:37:05.813Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:39:00.723 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:39:00.723 Nvme1n1            :       5.01   19129.70     149.45       0.00     0.00    6684.87    2839.89   11304.96
00:39:00.723 [2024-11-25T13:37:05.813Z] ===================================================================================================================
00:39:00.723 [2024-11-25T13:37:05.813Z] Total              :           19129.70     149.45       0.00     0.00    6684.87    2839.89   11304.96
00:39:00.723 [2024-11-25 14:37:05.673055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.673068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.685060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.685073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.697059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.697071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.709060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25 14:37:05.709072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:00.723 [2024-11-25 14:37:05.721056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:00.723 [2024-11-25
14:37:05.721066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.723 [2024-11-25 14:37:05.733052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.723 [2024-11-25 14:37:05.733060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.723 [2024-11-25 14:37:05.745054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.723 [2024-11-25 14:37:05.745063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.723 [2024-11-25 14:37:05.757055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.723 [2024-11-25 14:37:05.757066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.723 [2024-11-25 14:37:05.769052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:00.723 [2024-11-25 14:37:05.769061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:00.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3694579) - No such process 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3694579 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.723 delay0 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:00.723 14:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:00.983 [2024-11-25 14:37:05.935529] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:07.567 Initializing NVMe Controllers 00:39:07.567 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:39:07.567 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:07.567 Initialization complete. Launching workers. 00:39:07.567 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3325 00:39:07.567 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3610, failed to submit 35 00:39:07.567 success 3445, unsuccessful 165, failed 0 00:39:07.567 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:07.567 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:07.567 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:07.567 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:07.568 rmmod nvme_tcp 00:39:07.568 rmmod nvme_fabrics 00:39:07.568 rmmod nvme_keyring 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3692446 ']' 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3692446 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3692446 ']' 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3692446 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3692446 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3692446' 00:39:07.568 killing process with pid 3692446 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3692446 00:39:07.568 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3692446 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # 
'[' '' == iso ']' 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:07.829 14:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:09.744 00:39:09.744 real 0m33.655s 00:39:09.744 user 0m42.409s 00:39:09.744 sys 0m12.506s 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:09.744 ************************************ 00:39:09.744 END TEST nvmf_zcopy 00:39:09.744 ************************************ 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:09.744 ************************************ 00:39:09.744 START TEST nvmf_nmic 00:39:09.744 ************************************ 00:39:09.744 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:10.005 * Looking for test storage... 
00:39:10.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:10.005 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:10.005 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:10.005 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:10.005 14:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:10.005 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.006 --rc genhtml_branch_coverage=1 00:39:10.006 --rc genhtml_function_coverage=1 00:39:10.006 --rc genhtml_legend=1 00:39:10.006 --rc geninfo_all_blocks=1 00:39:10.006 --rc geninfo_unexecuted_blocks=1 00:39:10.006 00:39:10.006 ' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.006 --rc genhtml_branch_coverage=1 00:39:10.006 --rc genhtml_function_coverage=1 00:39:10.006 --rc genhtml_legend=1 00:39:10.006 --rc geninfo_all_blocks=1 00:39:10.006 --rc geninfo_unexecuted_blocks=1 00:39:10.006 00:39:10.006 ' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.006 --rc genhtml_branch_coverage=1 00:39:10.006 --rc genhtml_function_coverage=1 00:39:10.006 --rc genhtml_legend=1 00:39:10.006 --rc geninfo_all_blocks=1 00:39:10.006 --rc geninfo_unexecuted_blocks=1 00:39:10.006 00:39:10.006 ' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:10.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:10.006 --rc genhtml_branch_coverage=1 00:39:10.006 --rc genhtml_function_coverage=1 00:39:10.006 --rc genhtml_legend=1 00:39:10.006 --rc geninfo_all_blocks=1 00:39:10.006 --rc geninfo_unexecuted_blocks=1 00:39:10.006 00:39:10.006 ' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:10.006 14:37:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:10.006 14:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:18.149 14:37:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:18.149 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:18.149 14:37:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:18.149 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:18.149 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:18.149 
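
Each matching PCI function is then resolved to its kernel net device by globbing sysfs and stripping the path prefix, which is where the 'Found net devices under 0000:4b:00.0: cvl_0_0' lines come from. The same two steps in isolation, assuming the BDF is present on the host:

pci=0000:4b:00.0                                   # example BDF from this run
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
echo "Found net devices under $pci: ${pci_net_devs[*]}"
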
14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:18.149 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:18.149 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
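
nvmf_tcp_init isolates the target-side port in its own network namespace so that both ends of the TCP connection can live on one host while still crossing the real NICs. A condensed sketch of the sequence traced here (root required; interface names as in this run; the link-up and firewall steps continue just below):

TARGET_IF=cvl_0_0 INIT_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"               # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev "$INIT_IF"             # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
ip link set "$INIT_IF" up
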
00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:18.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:18.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:39:18.150 00:39:18.150 --- 10.0.0.2 ping statistics --- 00:39:18.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.150 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:18.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:18.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:39:18.150 00:39:18.150 --- 10.0.0.1 ping statistics --- 00:39:18.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:18.150 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3700927 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
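
Two details of the prologue above are worth noting: the ipts wrapper appends '-m comment --comment SPDK_NVMF:<rule>' to every rule it installs so teardown can later strip exactly those rules, and the pair of pings proves L3 reachability in both directions before the target is even started. A sketch of the tagging trick:

rule=(-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
iptables "${rule[@]}" -m comment --comment "SPDK_NVMF:${rule[*]}"   # tag for later removal
ping -c 1 10.0.0.2                                  # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path
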
nvmf/common.sh@510 -- # waitforlisten 3700927 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3700927 ']' 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:18.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:18.150 14:37:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.150 [2024-11-25 14:37:22.636237] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:18.150 [2024-11-25 14:37:22.637372] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:39:18.150 [2024-11-25 14:37:22.637425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:18.150 [2024-11-25 14:37:22.737795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:18.150 [2024-11-25 14:37:22.792420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:18.150 [2024-11-25 14:37:22.792471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:18.150 [2024-11-25 14:37:22.792480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:18.150 [2024-11-25 14:37:22.792487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:18.150 [2024-11-25 14:37:22.792494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:18.150 [2024-11-25 14:37:22.794805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:18.150 [2024-11-25 14:37:22.794964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:18.150 [2024-11-25 14:37:22.795124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:18.150 [2024-11-25 14:37:22.795125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:18.150 [2024-11-25 14:37:22.872671] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:18.150 [2024-11-25 14:37:22.873840] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:18.150 [2024-11-25 14:37:22.873960] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
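
nvmfappstart launches nvmf_tgt inside the namespace with --interrupt-mode and a 4-core mask, then waitforlisten blocks until the app answers on /var/tmp/spdk.sock; the NOTICE lines confirm every reactor and spdk_thread came up in interrupt mode. A crude stand-in for that readiness wait (the real helper is more careful about retries and error reporting):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
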
00:39:18.150 [2024-11-25 14:37:22.874319] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:18.150 [2024-11-25 14:37:22.874384] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:18.411 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:18.411 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:18.411 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:18.411 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:18.411 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.672 [2024-11-25 14:37:23.508237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.672 Malloc0 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
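
The rpc_cmd calls above provision the target end to end: a TCP transport with the suite's options (-t tcp -o -u 8192), a 64 MiB x 512 B malloc bdev, subsystem cnode1 with serial SPDKISFASTANDAWESOME, the bdev attached as a namespace, and a listener on 10.0.0.2:4420. The same sequence as plain rpc.py invocations (RPC as defined in the sketch above):

"$RPC" nvmf_create_transport -t tcp -o -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE, MALLOC_BLOCK_SIZE
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
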
00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.672 [2024-11-25 14:37:23.600390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:18.672 test case1: single bdev can't be used in multiple subsystems 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:18.672 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.673 [2024-11-25 14:37:23.635838] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:18.673 [2024-11-25 14:37:23.635863] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:18.673 [2024-11-25 14:37:23.635872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:18.673 request: 00:39:18.673 { 00:39:18.673 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:18.673 "namespace": { 00:39:18.673 "bdev_name": "Malloc0", 00:39:18.673 "no_auto_visible": false 00:39:18.673 }, 00:39:18.673 "method": "nvmf_subsystem_add_ns", 00:39:18.673 "req_id": 1 00:39:18.673 } 00:39:18.673 Got JSON-RPC error response 00:39:18.673 response: 00:39:18.673 { 00:39:18.673 "code": -32602, 00:39:18.673 "message": "Invalid parameters" 00:39:18.673 } 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:18.673 14:37:23 
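
test case1 then creates a second subsystem and deliberately tries to attach the same Malloc0; the bdev layer refuses a second exclusive_write claim, the RPC fails with -32602, and the test treats that failure as the pass condition. The assertion pattern, reduced:

"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "namespace add unexpectedly succeeded" >&2   # one bdev, one exclusive writer
    exit 1
fi
echo ' Adding namespace failed - expected result.'
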
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:18.673 Adding namespace failed - expected result. 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:18.673 test case2: host connect to nvmf target in multiple paths 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:18.673 [2024-11-25 14:37:23.648001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:18.673 14:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:19.243 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:19.503 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:19.503 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:19.503 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:19.503 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:19.504 14:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:21.414 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:21.414 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:21.414 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:21.414 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:21.414 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:21.414 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:21.414 14:37:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:21.414 [global] 00:39:21.414 thread=1 00:39:21.414 invalidate=1 
00:39:21.414 rw=write 00:39:21.414 time_based=1 00:39:21.414 runtime=1 00:39:21.414 ioengine=libaio 00:39:21.414 direct=1 00:39:21.414 bs=4096 00:39:21.414 iodepth=1 00:39:21.414 norandommap=0 00:39:21.414 numjobs=1 00:39:21.414 00:39:21.414 verify_dump=1 00:39:21.414 verify_backlog=512 00:39:21.414 verify_state_save=0 00:39:21.414 do_verify=1 00:39:21.414 verify=crc32c-intel 00:39:21.414 [job0] 00:39:21.414 filename=/dev/nvme0n1 00:39:21.702 Could not set queue depth (nvme0n1) 00:39:21.962 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:21.962 fio-3.35 00:39:21.962 Starting 1 thread 00:39:22.895 00:39:22.895 job0: (groupid=0, jobs=1): err= 0: pid=3701970: Mon Nov 25 14:37:27 2024 00:39:22.895 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:22.895 slat (nsec): min=6716, max=60779, avg=25693.13, stdev=3758.33 00:39:22.895 clat (usec): min=684, max=1190, avg=1012.36, stdev=81.67 00:39:22.895 lat (usec): min=710, max=1215, avg=1038.06, stdev=82.30 00:39:22.895 clat percentiles (usec): 00:39:22.895 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 947], 00:39:22.895 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:39:22.895 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1090], 95.00th=[ 1123], 00:39:22.895 | 99.00th=[ 1172], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188], 00:39:22.895 | 99.99th=[ 1188] 00:39:22.895 write: IOPS=753, BW=3013KiB/s (3085kB/s)(3016KiB/1001msec); 0 zone resets 00:39:22.895 slat (nsec): min=9518, max=54778, avg=28437.25, stdev=9969.08 00:39:22.895 clat (usec): min=254, max=824, avg=580.40, stdev=97.17 00:39:22.895 lat (usec): min=263, max=843, avg=608.83, stdev=101.44 00:39:22.895 clat percentiles (usec): 00:39:22.895 | 1.00th=[ 334], 5.00th=[ 400], 10.00th=[ 449], 20.00th=[ 498], 00:39:22.895 | 30.00th=[ 545], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:39:22.895 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 693], 95.00th=[ 725], 00:39:22.895 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 824], 99.95th=[ 824], 00:39:22.895 | 99.99th=[ 824] 00:39:22.895 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:22.895 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:22.895 lat (usec) : 500=12.56%, 750=46.37%, 1000=14.93% 00:39:22.895 lat (msec) : 2=26.15% 00:39:22.895 cpu : usr=1.80%, sys=3.70%, ctx=1266, majf=0, minf=1 00:39:22.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:22.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.895 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:22.895 issued rwts: total=512,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:22.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:22.895 00:39:22.895 Run status group 0 (all jobs): 00:39:22.895 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:39:22.895 WRITE: bw=3013KiB/s (3085kB/s), 3013KiB/s-3013KiB/s (3085kB/s-3085kB/s), io=3016KiB (3088kB), run=1001-1001msec 00:39:22.895 00:39:22.895 Disk stats (read/write): 00:39:22.895 nvme0n1: ios=562/591, merge=0/0, ticks=573/327, in_queue=900, util=93.79% 00:39:23.153 14:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:23.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:23.153 14:37:28 
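
For test case2 the host connects to cnode1 twice, over ports 4420 and 4421, then waitforserial polls lsblk until a block device carrying the subsystem's serial appears before handing /dev/nvme0n1 to the fio wrapper (the verify job file and its results are dumped above). The polling idiom, reduced:

serial=SPDKISFASTANDAWESOME
nvme_devices=0
for ((i = 0; i <= 15; i++)); do
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices >= 1 )) && break
    sleep 2
done
(( nvme_devices >= 1 )) || { echo "no namespace with serial $serial" >&2; exit 1; }
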
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:23.153 rmmod nvme_tcp 00:39:23.153 rmmod nvme_fabrics 00:39:23.153 rmmod nvme_keyring 00:39:23.153 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3700927 ']' 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3700927 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3700927 ']' 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3700927 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3700927 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3700927' 00:39:23.412 killing process with pid 3700927 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3700927 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3700927 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:23.412 14:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:25.958 00:39:25.958 real 0m15.702s 00:39:25.958 user 0m35.708s 00:39:25.958 sys 0m7.354s 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:25.958 ************************************ 00:39:25.958 END TEST nvmf_nmic 00:39:25.958 ************************************ 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:25.958 ************************************ 00:39:25.958 START TEST nvmf_fio_target 00:39:25.958 ************************************ 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:25.958 * Looking for test storage... 
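
nvmftestfini tears everything down in the reverse order it was built: unload nvme-tcp/nvme-fabrics/nvme-keyring, kill the target by pid, then remove only the firewall rules tagged earlier and collapse the namespace. The iptr cleanup relies on the SPDK_NVMF comment added during setup; a reduced sketch, with ip netns delete standing in for the harness's _remove_spdk_ns:

iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
ip netns delete cvl_0_0_ns_spdk 2> /dev/null
ip -4 addr flush cvl_0_1
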
00:39:25.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:25.958 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:25.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.959 --rc genhtml_branch_coverage=1 00:39:25.959 --rc genhtml_function_coverage=1 00:39:25.959 --rc genhtml_legend=1 00:39:25.959 --rc geninfo_all_blocks=1 00:39:25.959 --rc geninfo_unexecuted_blocks=1 00:39:25.959 00:39:25.959 ' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:25.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.959 --rc genhtml_branch_coverage=1 00:39:25.959 --rc genhtml_function_coverage=1 00:39:25.959 --rc genhtml_legend=1 00:39:25.959 --rc geninfo_all_blocks=1 00:39:25.959 --rc geninfo_unexecuted_blocks=1 00:39:25.959 00:39:25.959 ' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:25.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.959 --rc genhtml_branch_coverage=1 00:39:25.959 --rc genhtml_function_coverage=1 00:39:25.959 --rc genhtml_legend=1 00:39:25.959 --rc geninfo_all_blocks=1 00:39:25.959 --rc geninfo_unexecuted_blocks=1 00:39:25.959 00:39:25.959 ' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:25.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:25.959 --rc genhtml_branch_coverage=1 00:39:25.959 --rc genhtml_function_coverage=1 00:39:25.959 --rc genhtml_legend=1 00:39:25.959 --rc geninfo_all_blocks=1 00:39:25.959 --rc geninfo_unexecuted_blocks=1 00:39:25.959 
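
The fio.sh prologue above is scripts/common.sh version-checking lcov: each version string is split into fields and the fields are compared numerically, left to right. A compact equivalent of the lt helper, covering the dot-separated numeric case exercised here (the real cmp_versions also splits on '-' and ':'):

lt() {   # succeeds when $1 sorts strictly before $2, comparing numeric fields
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1
}
lt 1.15 2 && echo '1.15 < 2'   # the same comparison as "lt 1.15 2" in the trace
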
00:39:25.959 ' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:25.959 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:25.960 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:25.960 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:39:25.960 14:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:34.105 14:37:37 
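
build_nvmf_app_args grows the NVMF_APP array step by step: the shared-memory id and tracepoint mask always, --interrupt-mode because this suite runs interrupt-mode tests, and later the 'ip netns exec' prefix is spliced in front so the target starts inside the test namespace. The array-building idiom, with INTERRUPT_MODE as a hypothetical stand-in for the harness's actual guard:

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0 INTERRUPT_MODE=1
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
[[ $INTERRUPT_MODE -eq 1 ]] && NVMF_APP+=(--interrupt-mode)
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # final command line
echo "${NVMF_APP[@]}"
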
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:34.105 14:37:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:34.105 14:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:34.105 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:34.105 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:34.105 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:34.106 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:34.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:34.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:34.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms 00:39:34.106 00:39:34.106 --- 10.0.0.2 ping statistics --- 00:39:34.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.106 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:34.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:34.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:39:34.106 00:39:34.106 --- 10.0.0.1 ping statistics --- 00:39:34.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:34.106 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3706457 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3706457 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3706457 ']' 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:34.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
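Condensed for reference, the data-path setup that nvmf_tcp_init traced above is the following shell sequence; every command, interface name (cvl_0_0 / cvl_0_1) and address is taken from this run, only the comments are added:

  # move the target-side E810 port into a private namespace, address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP listener port toward the initiator, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # launch the target inside the namespace: 4 cores (-m 0xF), interrupt mode
  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF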
00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:34.106 14:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:34.106 [2024-11-25 14:37:38.412547] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:34.106 [2024-11-25 14:37:38.413689] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:39:34.106 [2024-11-25 14:37:38.413740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:34.106 [2024-11-25 14:37:38.516512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:34.106 [2024-11-25 14:37:38.571233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:34.106 [2024-11-25 14:37:38.571291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:34.106 [2024-11-25 14:37:38.571299] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:34.106 [2024-11-25 14:37:38.571306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:34.106 [2024-11-25 14:37:38.571313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:34.106 [2024-11-25 14:37:38.573750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:34.106 [2024-11-25 14:37:38.573930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:34.106 [2024-11-25 14:37:38.574133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.106 [2024-11-25 14:37:38.574133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:34.106 [2024-11-25 14:37:38.651550] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:34.106 [2024-11-25 14:37:38.652631] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:34.106 [2024-11-25 14:37:38.652787] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:34.106 [2024-11-25 14:37:38.653293] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:34.106 [2024-11-25 14:37:38.653336] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
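With the reactors started in interrupt mode, the trace that follows provisions the target over /var/tmp/spdk.sock and attaches the kernel initiator. In order, the rpc.py sequence it executes is (all names, sizes and addresses as used in this run; rpc.py is shorthand for the full scripts/rpc.py path in the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512        # issued seven times -> Malloc0 .. Malloc6 (64 MB, 512 B blocks)
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

The four namespaces then surface on the initiator as /dev/nvme0n1 .. /dev/nvme0n4 (serial SPDKISFASTANDAWESOME), which is what waitforserial polls for and what the fio job files below target.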
00:39:34.367 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:34.367 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:34.367 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:34.367 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:34.367 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:34.367 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:34.367 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:34.367 [2024-11-25 14:37:39.451025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:34.628 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:34.888 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:34.888 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:34.888 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:34.888 14:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:35.149 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:35.149 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:35.410 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:35.410 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:35.672 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:35.672 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:35.672 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:35.932 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:35.932 14:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:36.193 14:37:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:36.193 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:36.454 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:36.454 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:36.454 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:36.713 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:36.713 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:36.972 14:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:36.972 [2024-11-25 14:37:41.970915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:36.972 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:37.232 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:37.491 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:37.751 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:37.751 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:37.751 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:37.751 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:37.752 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:37.752 14:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:40.370 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:40.370 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:39:40.370 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:40.370 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:40.370 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:40.370 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:39:40.370 14:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:40.370 [global] 00:39:40.370 thread=1 00:39:40.370 invalidate=1 00:39:40.370 rw=write 00:39:40.370 time_based=1 00:39:40.370 runtime=1 00:39:40.370 ioengine=libaio 00:39:40.370 direct=1 00:39:40.370 bs=4096 00:39:40.370 iodepth=1 00:39:40.370 norandommap=0 00:39:40.370 numjobs=1 00:39:40.370 00:39:40.370 verify_dump=1 00:39:40.370 verify_backlog=512 00:39:40.370 verify_state_save=0 00:39:40.370 do_verify=1 00:39:40.370 verify=crc32c-intel 00:39:40.370 [job0] 00:39:40.370 filename=/dev/nvme0n1 00:39:40.370 [job1] 00:39:40.370 filename=/dev/nvme0n2 00:39:40.370 [job2] 00:39:40.370 filename=/dev/nvme0n3 00:39:40.370 [job3] 00:39:40.370 filename=/dev/nvme0n4 00:39:40.370 Could not set queue depth (nvme0n1) 00:39:40.370 Could not set queue depth (nvme0n2) 00:39:40.370 Could not set queue depth (nvme0n3) 00:39:40.370 Could not set queue depth (nvme0n4) 00:39:40.370 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:40.370 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:40.370 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:40.370 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:40.370 fio-3.35 00:39:40.370 Starting 4 threads 00:39:41.807 00:39:41.807 job0: (groupid=0, jobs=1): err= 0: pid=3707930: Mon Nov 25 14:37:46 2024 00:39:41.807 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:41.807 slat (nsec): min=25396, max=45253, avg=26709.74, stdev=3173.94 00:39:41.807 clat (usec): min=664, max=1356, avg=1061.33, stdev=109.38 00:39:41.807 lat (usec): min=690, max=1382, avg=1088.04, stdev=109.17 00:39:41.807 clat percentiles (usec): 00:39:41.807 | 1.00th=[ 783], 5.00th=[ 848], 10.00th=[ 914], 20.00th=[ 979], 00:39:41.807 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1090], 00:39:41.807 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1237], 00:39:41.807 | 99.00th=[ 1287], 99.50th=[ 1303], 99.90th=[ 1352], 99.95th=[ 1352], 00:39:41.807 | 99.99th=[ 1352] 00:39:41.807 write: IOPS=647, BW=2589KiB/s (2652kB/s)(2592KiB/1001msec); 0 zone resets 00:39:41.807 slat (nsec): min=9945, max=65433, avg=31147.38, stdev=9756.07 00:39:41.807 clat (usec): min=250, max=1103, avg=638.13, stdev=130.21 00:39:41.807 lat (usec): min=264, max=1138, avg=669.28, stdev=133.69 00:39:41.807 clat percentiles (usec): 00:39:41.807 | 1.00th=[ 343], 5.00th=[ 429], 10.00th=[ 478], 20.00th=[ 523], 00:39:41.807 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:39:41.807 | 70.00th=[ 701], 80.00th=[ 750], 90.00th=[ 807], 95.00th=[ 865], 00:39:41.807 | 
99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1106], 99.95th=[ 1106], 00:39:41.807 | 99.99th=[ 1106] 00:39:41.807 bw ( KiB/s): min= 4096, max= 4096, per=36.17%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.807 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.807 lat (usec) : 500=8.10%, 750=37.41%, 1000=21.29% 00:39:41.807 lat (msec) : 2=33.19% 00:39:41.807 cpu : usr=1.90%, sys=3.30%, ctx=1162, majf=0, minf=1 00:39:41.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.807 issued rwts: total=512,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.807 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.807 job1: (groupid=0, jobs=1): err= 0: pid=3707942: Mon Nov 25 14:37:46 2024 00:39:41.807 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:41.807 slat (nsec): min=9780, max=46374, avg=26401.94, stdev=3031.71 00:39:41.807 clat (usec): min=683, max=1619, avg=1040.44, stdev=112.02 00:39:41.807 lat (usec): min=695, max=1647, avg=1066.84, stdev=112.04 00:39:41.807 clat percentiles (usec): 00:39:41.807 | 1.00th=[ 783], 5.00th=[ 832], 10.00th=[ 881], 20.00th=[ 955], 00:39:41.807 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1074], 00:39:41.807 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:39:41.807 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[ 1614], 99.95th=[ 1614], 00:39:41.807 | 99.99th=[ 1614] 00:39:41.807 write: IOPS=677, BW=2709KiB/s (2774kB/s)(2712KiB/1001msec); 0 zone resets 00:39:41.807 slat (nsec): min=9800, max=53606, avg=31434.38, stdev=9221.86 00:39:41.807 clat (usec): min=261, max=995, avg=621.64, stdev=123.83 00:39:41.807 lat (usec): min=274, max=1030, avg=653.08, stdev=127.21 00:39:41.807 clat percentiles (usec): 00:39:41.807 | 1.00th=[ 330], 5.00th=[ 396], 10.00th=[ 457], 20.00th=[ 519], 00:39:41.807 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:39:41.807 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 824], 00:39:41.807 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 996], 99.95th=[ 996], 00:39:41.807 | 99.99th=[ 996] 00:39:41.807 bw ( KiB/s): min= 4096, max= 4096, per=36.17%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.807 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.807 lat (usec) : 500=9.92%, 750=39.16%, 1000=21.43% 00:39:41.807 lat (msec) : 2=29.50% 00:39:41.807 cpu : usr=2.10%, sys=3.60%, ctx=1193, majf=0, minf=1 00:39:41.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.807 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.807 issued rwts: total=512,678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.808 job2: (groupid=0, jobs=1): err= 0: pid=3707962: Mon Nov 25 14:37:46 2024 00:39:41.808 read: IOPS=503, BW=2014KiB/s (2062kB/s)(2036KiB/1011msec) 00:39:41.808 slat (nsec): min=23369, max=61399, avg=27327.80, stdev=2407.88 00:39:41.808 clat (usec): min=799, max=42127, avg=1276.38, stdev=3140.41 00:39:41.808 lat (usec): min=826, max=42152, avg=1303.71, stdev=3140.19 00:39:41.808 clat percentiles (usec): 00:39:41.808 | 1.00th=[ 832], 5.00th=[ 889], 10.00th=[ 938], 20.00th=[ 979], 00:39:41.808 | 30.00th=[ 1004], 
40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1057], 00:39:41.808 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:39:41.808 | 99.00th=[ 1319], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:39:41.808 | 99.99th=[42206] 00:39:41.808 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:39:41.808 slat (nsec): min=9390, max=66059, avg=29826.07, stdev=10291.41 00:39:41.808 clat (usec): min=235, max=1028, avg=631.78, stdev=119.48 00:39:41.808 lat (usec): min=268, max=1079, avg=661.61, stdev=124.24 00:39:41.808 clat percentiles (usec): 00:39:41.808 | 1.00th=[ 351], 5.00th=[ 420], 10.00th=[ 474], 20.00th=[ 545], 00:39:41.808 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 660], 00:39:41.808 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 816], 00:39:41.808 | 99.00th=[ 881], 99.50th=[ 922], 99.90th=[ 1029], 99.95th=[ 1029], 00:39:41.808 | 99.99th=[ 1029] 00:39:41.808 bw ( KiB/s): min= 4096, max= 4096, per=36.17%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.808 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.808 lat (usec) : 250=0.10%, 500=7.05%, 750=35.55%, 1000=21.94% 00:39:41.808 lat (msec) : 2=35.06%, 50=0.29% 00:39:41.808 cpu : usr=1.58%, sys=4.36%, ctx=1022, majf=0, minf=2 00:39:41.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:41.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.808 issued rwts: total=509,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.808 job3: (groupid=0, jobs=1): err= 0: pid=3707968: Mon Nov 25 14:37:46 2024 00:39:41.808 read: IOPS=616, BW=2466KiB/s (2525kB/s)(2468KiB/1001msec) 00:39:41.808 slat (nsec): min=7278, max=44723, avg=25177.21, stdev=6017.28 00:39:41.808 clat (usec): min=292, max=1027, avg=739.40, stdev=131.81 00:39:41.808 lat (usec): min=318, max=1049, avg=764.58, stdev=131.90 00:39:41.808 clat percentiles (usec): 00:39:41.808 | 1.00th=[ 453], 5.00th=[ 562], 10.00th=[ 594], 20.00th=[ 619], 00:39:41.808 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 717], 60.00th=[ 807], 00:39:41.808 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 898], 95.00th=[ 930], 00:39:41.808 | 99.00th=[ 988], 99.50th=[ 1012], 99.90th=[ 1029], 99.95th=[ 1029], 00:39:41.808 | 99.99th=[ 1029] 00:39:41.808 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:39:41.808 slat (nsec): min=9940, max=74382, avg=32019.40, stdev=7248.99 00:39:41.808 clat (usec): min=142, max=783, avg=471.79, stdev=114.89 00:39:41.808 lat (usec): min=152, max=795, avg=503.81, stdev=115.02 00:39:41.808 clat percentiles (usec): 00:39:41.808 | 1.00th=[ 260], 5.00th=[ 326], 10.00th=[ 359], 20.00th=[ 379], 00:39:41.808 | 30.00th=[ 392], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 474], 00:39:41.808 | 70.00th=[ 570], 80.00th=[ 603], 90.00th=[ 635], 95.00th=[ 660], 00:39:41.808 | 99.00th=[ 709], 99.50th=[ 725], 99.90th=[ 758], 99.95th=[ 783], 00:39:41.808 | 99.99th=[ 783] 00:39:41.808 bw ( KiB/s): min= 4096, max= 4096, per=36.17%, avg=4096.00, stdev= 0.00, samples=1 00:39:41.808 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:41.808 lat (usec) : 250=0.37%, 500=38.76%, 750=42.84%, 1000=17.79% 00:39:41.808 lat (msec) : 2=0.24% 00:39:41.808 cpu : usr=2.00%, sys=5.40%, ctx=1642, majf=0, minf=2 00:39:41.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:39:41.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.808 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:41.808 issued rwts: total=617,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:41.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:41.808 00:39:41.808 Run status group 0 (all jobs): 00:39:41.808 READ: bw=8506KiB/s (8711kB/s), 2014KiB/s-2466KiB/s (2062kB/s-2525kB/s), io=8600KiB (8806kB), run=1001-1011msec 00:39:41.808 WRITE: bw=11.1MiB/s (11.6MB/s), 2026KiB/s-4092KiB/s (2074kB/s-4190kB/s), io=11.2MiB (11.7MB), run=1001-1011msec 00:39:41.808 00:39:41.808 Disk stats (read/write): 00:39:41.808 nvme0n1: ios=461/512, merge=0/0, ticks=1409/315, in_queue=1724, util=96.39% 00:39:41.808 nvme0n2: ios=475/512, merge=0/0, ticks=1404/306, in_queue=1710, util=96.83% 00:39:41.808 nvme0n3: ios=460/512, merge=0/0, ticks=438/254, in_queue=692, util=88.25% 00:39:41.808 nvme0n4: ios=512/855, merge=0/0, ticks=366/391, in_queue=757, util=89.50% 00:39:41.808 14:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:41.808 [global] 00:39:41.808 thread=1 00:39:41.808 invalidate=1 00:39:41.808 rw=randwrite 00:39:41.808 time_based=1 00:39:41.808 runtime=1 00:39:41.808 ioengine=libaio 00:39:41.808 direct=1 00:39:41.808 bs=4096 00:39:41.808 iodepth=1 00:39:41.808 norandommap=0 00:39:41.808 numjobs=1 00:39:41.808 00:39:41.808 verify_dump=1 00:39:41.808 verify_backlog=512 00:39:41.808 verify_state_save=0 00:39:41.808 do_verify=1 00:39:41.808 verify=crc32c-intel 00:39:41.808 [job0] 00:39:41.808 filename=/dev/nvme0n1 00:39:41.808 [job1] 00:39:41.808 filename=/dev/nvme0n2 00:39:41.808 [job2] 00:39:41.808 filename=/dev/nvme0n3 00:39:41.808 [job3] 00:39:41.808 filename=/dev/nvme0n4 00:39:41.808 Could not set queue depth (nvme0n1) 00:39:41.808 Could not set queue depth (nvme0n2) 00:39:41.808 Could not set queue depth (nvme0n3) 00:39:41.808 Could not set queue depth (nvme0n4) 00:39:42.069 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.069 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.069 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.069 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:42.069 fio-3.35 00:39:42.069 Starting 4 threads 00:39:43.453 00:39:43.453 job0: (groupid=0, jobs=1): err= 0: pid=3708375: Mon Nov 25 14:37:48 2024 00:39:43.453 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:43.453 slat (nsec): min=7035, max=59725, avg=28046.28, stdev=2578.11 00:39:43.453 clat (usec): min=714, max=1379, avg=1105.01, stdev=89.86 00:39:43.453 lat (usec): min=743, max=1407, avg=1133.06, stdev=89.60 00:39:43.453 clat percentiles (usec): 00:39:43.453 | 1.00th=[ 840], 5.00th=[ 955], 10.00th=[ 988], 20.00th=[ 1045], 00:39:43.453 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:39:43.453 | 70.00th=[ 1156], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1221], 00:39:43.453 | 99.00th=[ 1287], 99.50th=[ 1319], 99.90th=[ 1385], 99.95th=[ 1385], 00:39:43.453 | 99.99th=[ 1385] 00:39:43.453 write: IOPS=611, BW=2446KiB/s (2504kB/s)(2448KiB/1001msec); 0 zone resets 00:39:43.453 slat 
(nsec): min=9297, max=60061, avg=31958.45, stdev=9623.59 00:39:43.453 clat (usec): min=283, max=1000, avg=639.32, stdev=113.79 00:39:43.453 lat (usec): min=294, max=1035, avg=671.27, stdev=117.57 00:39:43.453 clat percentiles (usec): 00:39:43.453 | 1.00th=[ 367], 5.00th=[ 449], 10.00th=[ 486], 20.00th=[ 545], 00:39:43.453 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:39:43.453 | 70.00th=[ 701], 80.00th=[ 742], 90.00th=[ 775], 95.00th=[ 816], 00:39:43.453 | 99.00th=[ 889], 99.50th=[ 922], 99.90th=[ 1004], 99.95th=[ 1004], 00:39:43.453 | 99.99th=[ 1004] 00:39:43.453 bw ( KiB/s): min= 4087, max= 4087, per=39.85%, avg=4087.00, stdev= 0.00, samples=1 00:39:43.453 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:43.453 lat (usec) : 500=6.76%, 750=37.90%, 1000=15.66% 00:39:43.453 lat (msec) : 2=39.68% 00:39:43.453 cpu : usr=3.10%, sys=3.90%, ctx=1127, majf=0, minf=1 00:39:43.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.453 issued rwts: total=512,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.453 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:43.453 job1: (groupid=0, jobs=1): err= 0: pid=3708387: Mon Nov 25 14:37:48 2024 00:39:43.453 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:39:43.453 slat (nsec): min=25801, max=43383, avg=27398.94, stdev=4318.32 00:39:43.453 clat (usec): min=1329, max=42030, avg=39546.21, stdev=9849.12 00:39:43.453 lat (usec): min=1355, max=42059, avg=39573.61, stdev=9849.39 00:39:43.453 clat percentiles (usec): 00:39:43.453 | 1.00th=[ 1336], 5.00th=[ 1336], 10.00th=[41681], 20.00th=[41681], 00:39:43.453 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:39:43.453 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:43.453 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:43.453 | 99.99th=[42206] 00:39:43.453 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:39:43.453 slat (nsec): min=8968, max=50209, avg=28400.37, stdev=9142.32 00:39:43.453 clat (usec): min=343, max=2262, avg=617.58, stdev=146.09 00:39:43.453 lat (usec): min=362, max=2295, avg=645.98, stdev=149.40 00:39:43.453 clat percentiles (usec): 00:39:43.453 | 1.00th=[ 355], 5.00th=[ 383], 10.00th=[ 445], 20.00th=[ 498], 00:39:43.453 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:39:43.453 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 783], 00:39:43.453 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 2278], 99.95th=[ 2278], 00:39:43.453 | 99.99th=[ 2278] 00:39:43.453 bw ( KiB/s): min= 4087, max= 4087, per=39.85%, avg=4087.00, stdev= 0.00, samples=1 00:39:43.453 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:43.453 lat (usec) : 500=20.04%, 750=64.08%, 1000=12.29% 00:39:43.453 lat (msec) : 2=0.38%, 4=0.19%, 50=3.02% 00:39:43.453 cpu : usr=0.79%, sys=2.09%, ctx=529, majf=0, minf=1 00:39:43.453 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.453 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.453 latency : target=0, window=0, percentile=100.00%, depth=1 
00:39:43.453 job2: (groupid=0, jobs=1): err= 0: pid=3708404: Mon Nov 25 14:37:48 2024 00:39:43.454 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1023msec) 00:39:43.454 slat (nsec): min=26257, max=26954, avg=26673.82, stdev=211.61 00:39:43.454 clat (usec): min=1185, max=42031, avg=39430.97, stdev=9859.15 00:39:43.454 lat (usec): min=1212, max=42058, avg=39457.64, stdev=9859.10 00:39:43.454 clat percentiles (usec): 00:39:43.454 | 1.00th=[ 1188], 5.00th=[ 1188], 10.00th=[41157], 20.00th=[41681], 00:39:43.454 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:39:43.454 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:43.454 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:43.454 | 99.99th=[42206] 00:39:43.454 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:39:43.454 slat (nsec): min=8980, max=63579, avg=29553.18, stdev=9045.98 00:39:43.454 clat (usec): min=149, max=1029, avg=651.11, stdev=132.03 00:39:43.454 lat (usec): min=159, max=1062, avg=680.66, stdev=135.24 00:39:43.454 clat percentiles (usec): 00:39:43.454 | 1.00th=[ 289], 5.00th=[ 392], 10.00th=[ 474], 20.00th=[ 562], 00:39:43.454 | 30.00th=[ 603], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 693], 00:39:43.454 | 70.00th=[ 725], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 840], 00:39:43.454 | 99.00th=[ 938], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1029], 00:39:43.454 | 99.99th=[ 1029] 00:39:43.454 bw ( KiB/s): min= 4087, max= 4087, per=39.85%, avg=4087.00, stdev= 0.00, samples=1 00:39:43.454 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:43.454 lat (usec) : 250=0.57%, 500=12.48%, 750=62.95%, 1000=20.60% 00:39:43.454 lat (msec) : 2=0.38%, 50=3.02% 00:39:43.454 cpu : usr=0.68%, sys=2.25%, ctx=529, majf=0, minf=1 00:39:43.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.454 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:43.454 job3: (groupid=0, jobs=1): err= 0: pid=3708410: Mon Nov 25 14:37:48 2024 00:39:43.454 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:43.454 slat (nsec): min=6400, max=43507, avg=16797.96, stdev=9606.57 00:39:43.454 clat (usec): min=575, max=1284, avg=904.00, stdev=126.07 00:39:43.454 lat (usec): min=582, max=1311, avg=920.79, stdev=133.92 00:39:43.454 clat percentiles (usec): 00:39:43.454 | 1.00th=[ 619], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 799], 00:39:43.454 | 30.00th=[ 816], 40.00th=[ 840], 50.00th=[ 873], 60.00th=[ 938], 00:39:43.454 | 70.00th=[ 996], 80.00th=[ 1037], 90.00th=[ 1090], 95.00th=[ 1106], 00:39:43.454 | 99.00th=[ 1139], 99.50th=[ 1188], 99.90th=[ 1287], 99.95th=[ 1287], 00:39:43.454 | 99.99th=[ 1287] 00:39:43.454 write: IOPS=986, BW=3944KiB/s (4039kB/s)(3948KiB/1001msec); 0 zone resets 00:39:43.454 slat (nsec): min=5459, max=68774, avg=16144.32, stdev=11130.60 00:39:43.454 clat (usec): min=165, max=1436, avg=513.09, stdev=119.93 00:39:43.454 lat (usec): min=190, max=1467, avg=529.24, stdev=125.07 00:39:43.454 clat percentiles (usec): 00:39:43.454 | 1.00th=[ 235], 5.00th=[ 343], 10.00th=[ 383], 20.00th=[ 445], 00:39:43.454 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 515], 00:39:43.454 | 70.00th=[ 537], 80.00th=[ 586], 90.00th=[ 676], 95.00th=[ 
750], 00:39:43.454 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 1434], 99.95th=[ 1434], 00:39:43.454 | 99.99th=[ 1434] 00:39:43.454 bw ( KiB/s): min= 4087, max= 4087, per=39.85%, avg=4087.00, stdev= 0.00, samples=1 00:39:43.454 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:43.454 lat (usec) : 250=1.13%, 500=33.69%, 750=30.22%, 1000=25.15% 00:39:43.454 lat (msec) : 2=9.81% 00:39:43.454 cpu : usr=2.00%, sys=2.80%, ctx=1500, majf=0, minf=1 00:39:43.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:43.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:43.454 issued rwts: total=512,987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:43.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:43.454 00:39:43.454 Run status group 0 (all jobs): 00:39:43.454 READ: bw=4137KiB/s (4236kB/s), 66.5KiB/s-2046KiB/s (68.1kB/s-2095kB/s), io=4232KiB (4334kB), run=1001-1023msec 00:39:43.454 WRITE: bw=10.0MiB/s (10.5MB/s), 2002KiB/s-3944KiB/s (2050kB/s-4039kB/s), io=10.2MiB (10.7MB), run=1001-1023msec 00:39:43.454 00:39:43.454 Disk stats (read/write): 00:39:43.454 nvme0n1: ios=467/512, merge=0/0, ticks=1292/271, in_queue=1563, util=99.00% 00:39:43.454 nvme0n2: ios=52/512, merge=0/0, ticks=551/250, in_queue=801, util=88.63% 00:39:43.454 nvme0n3: ios=39/512, merge=0/0, ticks=907/265, in_queue=1172, util=96.88% 00:39:43.454 nvme0n4: ios=512/674, merge=0/0, ticks=448/330, in_queue=778, util=89.70% 00:39:43.454 14:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:43.454 [global] 00:39:43.454 thread=1 00:39:43.454 invalidate=1 00:39:43.454 rw=write 00:39:43.454 time_based=1 00:39:43.454 runtime=1 00:39:43.454 ioengine=libaio 00:39:43.454 direct=1 00:39:43.454 bs=4096 00:39:43.454 iodepth=128 00:39:43.454 norandommap=0 00:39:43.454 numjobs=1 00:39:43.454 00:39:43.454 verify_dump=1 00:39:43.454 verify_backlog=512 00:39:43.454 verify_state_save=0 00:39:43.454 do_verify=1 00:39:43.454 verify=crc32c-intel 00:39:43.454 [job0] 00:39:43.454 filename=/dev/nvme0n1 00:39:43.454 [job1] 00:39:43.454 filename=/dev/nvme0n2 00:39:43.454 [job2] 00:39:43.454 filename=/dev/nvme0n3 00:39:43.454 [job3] 00:39:43.454 filename=/dev/nvme0n4 00:39:43.454 Could not set queue depth (nvme0n1) 00:39:43.454 Could not set queue depth (nvme0n2) 00:39:43.454 Could not set queue depth (nvme0n3) 00:39:43.454 Could not set queue depth (nvme0n4) 00:39:43.714 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:43.714 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:43.714 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:43.714 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:43.714 fio-3.35 00:39:43.714 Starting 4 threads 00:39:45.124 00:39:45.124 job0: (groupid=0, jobs=1): err= 0: pid=3708866: Mon Nov 25 14:37:49 2024 00:39:45.124 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:39:45.124 slat (nsec): min=967, max=9042.5k, avg=79049.04, stdev=593759.19 00:39:45.124 clat (usec): min=3247, max=38192, avg=10047.03, stdev=4004.29 00:39:45.124 lat (usec): min=3252, 
max=38200, avg=10126.08, stdev=4055.06 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 3949], 5.00th=[ 6194], 10.00th=[ 6652], 20.00th=[ 7242], 00:39:45.124 | 30.00th=[ 8225], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[10159], 00:39:45.124 | 70.00th=[10683], 80.00th=[11600], 90.00th=[13698], 95.00th=[15270], 00:39:45.124 | 99.00th=[29754], 99.50th=[33162], 99.90th=[37487], 99.95th=[38011], 00:39:45.124 | 99.99th=[38011] 00:39:45.124 write: IOPS=6457, BW=25.2MiB/s (26.5MB/s)(25.4MiB/1007msec); 0 zone resets 00:39:45.124 slat (nsec): min=1536, max=9754.2k, avg=72778.95, stdev=505027.68 00:39:45.124 clat (usec): min=1163, max=38162, avg=10154.89, stdev=7320.28 00:39:45.124 lat (usec): min=1254, max=38166, avg=10227.67, stdev=7367.77 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 3261], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5604], 00:39:45.124 | 30.00th=[ 6325], 40.00th=[ 6783], 50.00th=[ 7767], 60.00th=[ 8717], 00:39:45.124 | 70.00th=[ 9634], 80.00th=[11076], 90.00th=[22414], 95.00th=[30016], 00:39:45.124 | 99.00th=[34866], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:39:45.124 | 99.99th=[38011] 00:39:45.124 bw ( KiB/s): min=24576, max=26424, per=27.98%, avg=25500.00, stdev=1306.73, samples=2 00:39:45.124 iops : min= 6144, max= 6606, avg=6375.00, stdev=326.68, samples=2 00:39:45.124 lat (msec) : 2=0.16%, 4=2.31%, 10=63.90%, 20=26.23%, 50=7.41% 00:39:45.124 cpu : usr=5.47%, sys=6.06%, ctx=327, majf=0, minf=1 00:39:45.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:45.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:45.124 issued rwts: total=6144,6503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:45.124 job1: (groupid=0, jobs=1): err= 0: pid=3708875: Mon Nov 25 14:37:49 2024 00:39:45.124 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:39:45.124 slat (nsec): min=949, max=9494.1k, avg=62377.40, stdev=475226.18 00:39:45.124 clat (usec): min=2792, max=28117, avg=8597.99, stdev=3327.41 00:39:45.124 lat (usec): min=2798, max=28124, avg=8660.37, stdev=3357.19 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 3720], 5.00th=[ 5080], 10.00th=[ 5735], 20.00th=[ 6390], 00:39:45.124 | 30.00th=[ 6718], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8586], 00:39:45.124 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[12125], 95.00th=[14746], 00:39:45.124 | 99.00th=[22676], 99.50th=[23725], 99.90th=[27657], 99.95th=[28181], 00:39:45.124 | 99.99th=[28181] 00:39:45.124 write: IOPS=6159, BW=24.1MiB/s (25.2MB/s)(24.3MiB/1010msec); 0 zone resets 00:39:45.124 slat (nsec): min=1652, max=47509k, avg=88067.35, stdev=1124199.69 00:39:45.124 clat (usec): min=628, max=160181, avg=9288.08, stdev=8500.55 00:39:45.124 lat (usec): min=852, max=160230, avg=9376.14, stdev=8734.24 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 1680], 5.00th=[ 3556], 10.00th=[ 4424], 20.00th=[ 5342], 00:39:45.124 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7832], 00:39:45.124 | 70.00th=[ 9372], 80.00th=[ 10683], 90.00th=[ 15533], 95.00th=[ 24511], 00:39:45.124 | 99.00th=[ 30540], 99.50th=[ 49546], 99.90th=[141558], 99.95th=[160433], 00:39:45.124 | 99.99th=[160433] 00:39:45.124 bw ( KiB/s): min=16480, max=32672, per=26.96%, avg=24576.00, stdev=11449.47, samples=2 00:39:45.124 iops : min= 4120, max= 8168, avg=6144.00, stdev=2862.37, 
samples=2 00:39:45.124 lat (usec) : 750=0.01%, 1000=0.16% 00:39:45.124 lat (msec) : 2=0.54%, 4=2.90%, 10=71.52%, 20=19.99%, 50=4.76% 00:39:45.124 lat (msec) : 100=0.06%, 250=0.06% 00:39:45.124 cpu : usr=4.76%, sys=6.74%, ctx=461, majf=0, minf=1 00:39:45.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:45.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:45.124 issued rwts: total=6144,6221,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:45.124 job2: (groupid=0, jobs=1): err= 0: pid=3708891: Mon Nov 25 14:37:49 2024 00:39:45.124 read: IOPS=6559, BW=25.6MiB/s (26.9MB/s)(25.7MiB/1004msec) 00:39:45.124 slat (nsec): min=928, max=10967k, avg=72589.17, stdev=510540.04 00:39:45.124 clat (usec): min=1000, max=39245, avg=9587.41, stdev=5041.09 00:39:45.124 lat (usec): min=2721, max=40272, avg=9660.00, stdev=5077.65 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 3785], 5.00th=[ 4948], 10.00th=[ 5866], 20.00th=[ 6390], 00:39:45.124 | 30.00th=[ 7177], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:39:45.124 | 70.00th=[ 9634], 80.00th=[10683], 90.00th=[14877], 95.00th=[21103], 00:39:45.124 | 99.00th=[31851], 99.50th=[34341], 99.90th=[39060], 99.95th=[39060], 00:39:45.124 | 99.99th=[39060] 00:39:45.124 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:39:45.124 slat (nsec): min=1570, max=12449k, avg=73373.26, stdev=482340.59 00:39:45.124 clat (usec): min=311, max=32065, avg=9643.08, stdev=4651.88 00:39:45.124 lat (usec): min=338, max=32807, avg=9716.45, stdev=4691.21 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 3359], 5.00th=[ 4293], 10.00th=[ 5604], 20.00th=[ 6325], 00:39:45.124 | 30.00th=[ 7177], 40.00th=[ 7701], 50.00th=[ 8029], 60.00th=[ 8455], 00:39:45.124 | 70.00th=[10945], 80.00th=[12780], 90.00th=[16909], 95.00th=[19268], 00:39:45.124 | 99.00th=[25035], 99.50th=[27132], 99.90th=[32113], 99.95th=[32113], 00:39:45.124 | 99.99th=[32113] 00:39:45.124 bw ( KiB/s): min=24520, max=28728, per=29.21%, avg=26624.00, stdev=2975.51, samples=2 00:39:45.124 iops : min= 6130, max= 7182, avg=6656.00, stdev=743.88, samples=2 00:39:45.124 lat (usec) : 500=0.02%, 1000=0.01% 00:39:45.124 lat (msec) : 2=0.08%, 4=2.10%, 10=69.05%, 20=23.76%, 50=4.98% 00:39:45.124 cpu : usr=3.99%, sys=7.28%, ctx=643, majf=0, minf=1 00:39:45.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:39:45.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:45.124 issued rwts: total=6586,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:45.124 job3: (groupid=0, jobs=1): err= 0: pid=3708897: Mon Nov 25 14:37:49 2024 00:39:45.124 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:39:45.124 slat (nsec): min=1029, max=14231k, avg=128029.84, stdev=901732.77 00:39:45.124 clat (usec): min=6178, max=44004, avg=17555.95, stdev=7587.73 00:39:45.124 lat (usec): min=7159, max=44032, avg=17683.98, stdev=7655.73 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 7308], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9896], 00:39:45.124 | 30.00th=[11731], 40.00th=[14484], 50.00th=[16319], 60.00th=[18744], 00:39:45.124 | 70.00th=[21365], 80.00th=[23725], 
90.00th=[28705], 95.00th=[31851], 00:39:45.124 | 99.00th=[40109], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:45.124 | 99.99th=[43779] 00:39:45.124 write: IOPS=3635, BW=14.2MiB/s (14.9MB/s)(14.4MiB/1012msec); 0 zone resets 00:39:45.124 slat (usec): min=2, max=16954, avg=141.73, stdev=908.59 00:39:45.124 clat (usec): min=5438, max=48990, avg=17492.07, stdev=10514.22 00:39:45.124 lat (usec): min=5447, max=49015, avg=17633.80, stdev=10605.79 00:39:45.124 clat percentiles (usec): 00:39:45.124 | 1.00th=[ 7242], 5.00th=[ 7701], 10.00th=[ 8094], 20.00th=[ 8979], 00:39:45.124 | 30.00th=[10421], 40.00th=[11207], 50.00th=[13304], 60.00th=[16581], 00:39:45.124 | 70.00th=[20579], 80.00th=[23462], 90.00th=[33817], 95.00th=[43254], 00:39:45.124 | 99.00th=[46924], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:39:45.124 | 99.99th=[49021] 00:39:45.124 bw ( KiB/s): min=12232, max=16440, per=15.73%, avg=14336.00, stdev=2975.51, samples=2 00:39:45.124 iops : min= 3058, max= 4110, avg=3584.00, stdev=743.88, samples=2 00:39:45.124 lat (msec) : 10=22.83%, 20=44.49%, 50=32.69% 00:39:45.124 cpu : usr=3.76%, sys=4.06%, ctx=231, majf=0, minf=1 00:39:45.124 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:39:45.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:45.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:45.124 issued rwts: total=3584,3679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:45.124 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:45.124 00:39:45.124 Run status group 0 (all jobs): 00:39:45.124 READ: bw=86.7MiB/s (90.9MB/s), 13.8MiB/s-25.6MiB/s (14.5MB/s-26.9MB/s), io=87.7MiB (92.0MB), run=1004-1012msec 00:39:45.124 WRITE: bw=89.0MiB/s (93.3MB/s), 14.2MiB/s-25.9MiB/s (14.9MB/s-27.2MB/s), io=90.1MiB (94.4MB), run=1004-1012msec 00:39:45.124 00:39:45.124 Disk stats (read/write): 00:39:45.124 nvme0n1: ios=5148/5212, merge=0/0, ticks=46496/52737, in_queue=99233, util=87.78% 00:39:45.124 nvme0n2: ios=4637/4692, merge=0/0, ticks=36847/38818, in_queue=75665, util=97.04% 00:39:45.124 nvme0n3: ios=5120/5519, merge=0/0, ticks=28071/32239, in_queue=60310, util=88.30% 00:39:45.125 nvme0n4: ios=3108/3431, merge=0/0, ticks=23260/27285, in_queue=50545, util=100.00% 00:39:45.125 14:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:45.125 [global] 00:39:45.125 thread=1 00:39:45.125 invalidate=1 00:39:45.125 rw=randwrite 00:39:45.125 time_based=1 00:39:45.125 runtime=1 00:39:45.125 ioengine=libaio 00:39:45.125 direct=1 00:39:45.125 bs=4096 00:39:45.125 iodepth=128 00:39:45.125 norandommap=0 00:39:45.125 numjobs=1 00:39:45.125 00:39:45.125 verify_dump=1 00:39:45.125 verify_backlog=512 00:39:45.125 verify_state_save=0 00:39:45.125 do_verify=1 00:39:45.125 verify=crc32c-intel 00:39:45.125 [job0] 00:39:45.125 filename=/dev/nvme0n1 00:39:45.125 [job1] 00:39:45.125 filename=/dev/nvme0n2 00:39:45.125 [job2] 00:39:45.125 filename=/dev/nvme0n3 00:39:45.125 [job3] 00:39:45.125 filename=/dev/nvme0n4 00:39:45.125 Could not set queue depth (nvme0n1) 00:39:45.125 Could not set queue depth (nvme0n2) 00:39:45.125 Could not set queue depth (nvme0n3) 00:39:45.125 Could not set queue depth (nvme0n4) 00:39:45.384 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:45.384 job1: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:45.384 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:45.384 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:45.384 fio-3.35 00:39:45.384 Starting 4 threads 00:39:46.779 00:39:46.779 job0: (groupid=0, jobs=1): err= 0: pid=3709325: Mon Nov 25 14:37:51 2024 00:39:46.779 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:39:46.779 slat (nsec): min=942, max=15619k, avg=174082.64, stdev=1131551.56 00:39:46.779 clat (usec): min=6690, max=60253, avg=22598.29, stdev=13310.62 00:39:46.779 lat (usec): min=6693, max=61151, avg=22772.38, stdev=13398.76 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 6849], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[11469], 00:39:46.779 | 30.00th=[13566], 40.00th=[14615], 50.00th=[15926], 60.00th=[19792], 00:39:46.779 | 70.00th=[32113], 80.00th=[37487], 90.00th=[43254], 95.00th=[46400], 00:39:46.779 | 99.00th=[53740], 99.50th=[56886], 99.90th=[60031], 99.95th=[60031], 00:39:46.779 | 99.99th=[60031] 00:39:46.779 write: IOPS=3321, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1007msec); 0 zone resets 00:39:46.779 slat (nsec): min=1589, max=12779k, avg=135118.97, stdev=834791.66 00:39:46.779 clat (usec): min=724, max=50400, avg=17045.78, stdev=10412.68 00:39:46.779 lat (usec): min=5923, max=50404, avg=17180.90, stdev=10479.65 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 6325], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8455], 00:39:46.779 | 30.00th=[ 9503], 40.00th=[11863], 50.00th=[12518], 60.00th=[14222], 00:39:46.779 | 70.00th=[18482], 80.00th=[26608], 90.00th=[35390], 95.00th=[38536], 00:39:46.779 | 99.00th=[45876], 99.50th=[49021], 99.90th=[50594], 99.95th=[50594], 00:39:46.779 | 99.99th=[50594] 00:39:46.779 bw ( KiB/s): min=12288, max=13448, per=14.58%, avg=12868.00, stdev=820.24, samples=2 00:39:46.779 iops : min= 3072, max= 3362, avg=3217.00, stdev=205.06, samples=2 00:39:46.779 lat (usec) : 750=0.02% 00:39:46.779 lat (msec) : 10=24.37%, 20=41.98%, 50=32.30%, 100=1.32% 00:39:46.779 cpu : usr=1.89%, sys=4.08%, ctx=301, majf=0, minf=2 00:39:46.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:39:46.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:46.779 issued rwts: total=3072,3345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:46.779 job1: (groupid=0, jobs=1): err= 0: pid=3709331: Mon Nov 25 14:37:51 2024 00:39:46.779 read: IOPS=6933, BW=27.1MiB/s (28.4MB/s)(27.2MiB/1006msec) 00:39:46.779 slat (nsec): min=902, max=8131.1k, avg=69047.88, stdev=518666.12 00:39:46.779 clat (usec): min=2201, max=23250, avg=8954.75, stdev=2645.18 00:39:46.779 lat (usec): min=2245, max=23274, avg=9023.80, stdev=2683.87 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 4424], 5.00th=[ 5342], 10.00th=[ 6521], 20.00th=[ 6980], 00:39:46.779 | 30.00th=[ 7439], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8848], 00:39:46.779 | 70.00th=[10028], 80.00th=[10683], 90.00th=[12649], 95.00th=[14877], 00:39:46.779 | 99.00th=[16450], 99.50th=[17957], 99.90th=[18482], 99.95th=[20841], 00:39:46.779 | 99.99th=[23200] 00:39:46.779 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:39:46.779 slat (nsec): 
min=1530, max=13794k, avg=65213.80, stdev=541728.22 00:39:46.779 clat (usec): min=936, max=49168, avg=9055.34, stdev=6023.41 00:39:46.779 lat (usec): min=945, max=49197, avg=9120.55, stdev=6074.85 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 1729], 5.00th=[ 4047], 10.00th=[ 4621], 20.00th=[ 5538], 00:39:46.779 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7635], 60.00th=[ 8160], 00:39:46.779 | 70.00th=[ 8979], 80.00th=[10421], 90.00th=[14615], 95.00th=[21103], 00:39:46.779 | 99.00th=[35390], 99.50th=[40109], 99.90th=[40109], 99.95th=[42206], 00:39:46.779 | 99.99th=[49021] 00:39:46.779 bw ( KiB/s): min=24576, max=32768, per=32.49%, avg=28672.00, stdev=5792.62, samples=2 00:39:46.779 iops : min= 6144, max= 8192, avg=7168.00, stdev=1448.15, samples=2 00:39:46.779 lat (usec) : 1000=0.02% 00:39:46.779 lat (msec) : 2=0.69%, 4=1.77%, 10=72.06%, 20=22.27%, 50=3.19% 00:39:46.779 cpu : usr=4.58%, sys=7.36%, ctx=369, majf=0, minf=1 00:39:46.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:39:46.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:46.779 issued rwts: total=6975,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:46.779 job2: (groupid=0, jobs=1): err= 0: pid=3709340: Mon Nov 25 14:37:51 2024 00:39:46.779 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:39:46.779 slat (nsec): min=973, max=9163.4k, avg=74139.17, stdev=512771.19 00:39:46.779 clat (usec): min=1548, max=43533, avg=9892.60, stdev=3742.73 00:39:46.779 lat (usec): min=1553, max=45325, avg=9966.74, stdev=3771.51 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 3982], 5.00th=[ 5735], 10.00th=[ 6718], 20.00th=[ 7504], 00:39:46.779 | 30.00th=[ 7832], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 9372], 00:39:46.779 | 70.00th=[10290], 80.00th=[11994], 90.00th=[15795], 95.00th=[17695], 00:39:46.779 | 99.00th=[21627], 99.50th=[21627], 99.90th=[25560], 99.95th=[43779], 00:39:46.779 | 99.99th=[43779] 00:39:46.779 write: IOPS=6991, BW=27.3MiB/s (28.6MB/s)(27.3MiB/1001msec); 0 zone resets 00:39:46.779 slat (nsec): min=1603, max=8397.2k, avg=64048.30, stdev=398007.34 00:39:46.779 clat (usec): min=534, max=23480, avg=8657.00, stdev=2826.91 00:39:46.779 lat (usec): min=542, max=23504, avg=8721.05, stdev=2853.14 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 2245], 5.00th=[ 5014], 10.00th=[ 6128], 20.00th=[ 6849], 00:39:46.779 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 7963], 60.00th=[ 8160], 00:39:46.779 | 70.00th=[ 9110], 80.00th=[10290], 90.00th=[12911], 95.00th=[14484], 00:39:46.779 | 99.00th=[16909], 99.50th=[17695], 99.90th=[19792], 99.95th=[20055], 00:39:46.779 | 99.99th=[23462] 00:39:46.779 bw ( KiB/s): min=26296, max=28672, per=31.14%, avg=27484.00, stdev=1680.09, samples=2 00:39:46.779 iops : min= 6574, max= 7168, avg=6871.00, stdev=420.02, samples=2 00:39:46.779 lat (usec) : 750=0.02%, 1000=0.07% 00:39:46.779 lat (msec) : 2=0.34%, 4=1.42%, 10=71.83%, 20=24.94%, 50=1.38% 00:39:46.779 cpu : usr=3.80%, sys=6.70%, ctx=530, majf=0, minf=1 00:39:46.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:39:46.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:46.779 issued rwts: total=6656,6998,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:39:46.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:46.779 job3: (groupid=0, jobs=1): err= 0: pid=3709347: Mon Nov 25 14:37:51 2024 00:39:46.779 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:39:46.779 slat (nsec): min=940, max=14403k, avg=111502.25, stdev=786202.33 00:39:46.779 clat (usec): min=1698, max=41992, avg=14548.25, stdev=7855.65 00:39:46.779 lat (usec): min=1709, max=42017, avg=14659.76, stdev=7926.57 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 4080], 5.00th=[ 6587], 10.00th=[ 7701], 20.00th=[ 8848], 00:39:46.779 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11076], 60.00th=[12780], 00:39:46.779 | 70.00th=[15401], 80.00th=[23200], 90.00th=[27657], 95.00th=[30278], 00:39:46.779 | 99.00th=[35914], 99.50th=[38011], 99.90th=[41681], 99.95th=[41681], 00:39:46.779 | 99.99th=[42206] 00:39:46.779 write: IOPS=4678, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1006msec); 0 zone resets 00:39:46.779 slat (nsec): min=1518, max=12876k, avg=97562.43, stdev=728247.16 00:39:46.779 clat (usec): min=646, max=41525, avg=12836.64, stdev=7073.01 00:39:46.779 lat (usec): min=830, max=41554, avg=12934.20, stdev=7136.81 00:39:46.779 clat percentiles (usec): 00:39:46.779 | 1.00th=[ 1745], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7767], 00:39:46.779 | 30.00th=[ 8455], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11600], 00:39:46.779 | 70.00th=[14484], 80.00th=[18744], 90.00th=[24511], 95.00th=[27919], 00:39:46.779 | 99.00th=[34341], 99.50th=[34866], 99.90th=[36963], 99.95th=[37487], 00:39:46.779 | 99.99th=[41681] 00:39:46.779 bw ( KiB/s): min=12288, max=24576, per=20.89%, avg=18432.00, stdev=8688.93, samples=2 00:39:46.779 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:39:46.779 lat (usec) : 750=0.01%, 1000=0.01% 00:39:46.779 lat (msec) : 2=0.62%, 4=1.29%, 10=41.22%, 20=36.32%, 50=20.53% 00:39:46.779 cpu : usr=3.08%, sys=5.17%, ctx=286, majf=0, minf=1 00:39:46.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:46.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:46.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:46.779 issued rwts: total=4608,4707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:46.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:46.779 00:39:46.779 Run status group 0 (all jobs): 00:39:46.779 READ: bw=82.7MiB/s (86.7MB/s), 11.9MiB/s-27.1MiB/s (12.5MB/s-28.4MB/s), io=83.2MiB (87.3MB), run=1001-1007msec 00:39:46.779 WRITE: bw=86.2MiB/s (90.4MB/s), 13.0MiB/s-27.8MiB/s (13.6MB/s-29.2MB/s), io=86.8MiB (91.0MB), run=1001-1007msec 00:39:46.780 00:39:46.780 Disk stats (read/write): 00:39:46.780 nvme0n1: ios=2762/3072, merge=0/0, ticks=18208/15436, in_queue=33644, util=99.90% 00:39:46.780 nvme0n2: ios=5669/5831, merge=0/0, ticks=31554/30627, in_queue=62181, util=93.57% 00:39:46.780 nvme0n3: ios=5403/5632, merge=0/0, ticks=29833/26511, in_queue=56344, util=96.83% 00:39:46.780 nvme0n4: ios=4096/4450, merge=0/0, ticks=25067/25474, in_queue=50541, util=89.20% 00:39:46.780 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:46.780 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3709638 00:39:46.780 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:46.780 14:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:46.780 [global] 00:39:46.780 thread=1 00:39:46.780 invalidate=1 00:39:46.780 rw=read 00:39:46.780 time_based=1 00:39:46.780 runtime=10 00:39:46.780 ioengine=libaio 00:39:46.780 direct=1 00:39:46.780 bs=4096 00:39:46.780 iodepth=1 00:39:46.780 norandommap=1 00:39:46.780 numjobs=1 00:39:46.780 00:39:46.780 [job0] 00:39:46.780 filename=/dev/nvme0n1 00:39:46.780 [job1] 00:39:46.780 filename=/dev/nvme0n2 00:39:46.780 [job2] 00:39:46.780 filename=/dev/nvme0n3 00:39:46.780 [job3] 00:39:46.780 filename=/dev/nvme0n4 00:39:46.780 Could not set queue depth (nvme0n1) 00:39:46.780 Could not set queue depth (nvme0n2) 00:39:46.780 Could not set queue depth (nvme0n3) 00:39:46.780 Could not set queue depth (nvme0n4) 00:39:47.043 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.043 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.043 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.043 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:47.043 fio-3.35 00:39:47.043 Starting 4 threads 00:39:49.572 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:49.830 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=258048, buflen=4096 00:39:49.830 fio: pid=3709835, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:49.830 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:49.830 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:49.830 14:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:49.830 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:39:49.830 fio: pid=3709831, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:50.088 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2019328, buflen=4096 00:39:50.088 fio: pid=3709825, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:50.088 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:50.088 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:50.348 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1138688, buflen=4096 00:39:50.348 fio: pid=3709826, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:50.348 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:50.348 14:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:50.348 00:39:50.348 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3709825: Mon Nov 25 14:37:55 2024 00:39:50.348 read: IOPS=166, BW=666KiB/s (682kB/s)(1972KiB/2961msec) 00:39:50.348 slat (usec): min=7, max=17556, avg=89.27, stdev=1001.53 00:39:50.348 clat (usec): min=592, max=42065, avg=5855.13, stdev=13139.32 00:39:50.348 lat (usec): min=630, max=42092, avg=5944.53, stdev=13155.46 00:39:50.348 clat percentiles (usec): 00:39:50.348 | 1.00th=[ 807], 5.00th=[ 906], 10.00th=[ 955], 20.00th=[ 996], 00:39:50.348 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1123], 00:39:50.348 | 70.00th=[ 1139], 80.00th=[ 1188], 90.00th=[41157], 95.00th=[42206], 00:39:50.348 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:50.348 | 99.99th=[42206] 00:39:50.348 bw ( KiB/s): min= 96, max= 1584, per=43.42%, avg=499.20, stdev=648.09, samples=5 00:39:50.348 iops : min= 24, max= 396, avg=124.80, stdev=162.02, samples=5 00:39:50.348 lat (usec) : 750=0.40%, 1000=21.66% 00:39:50.348 lat (msec) : 2=65.99%, 50=11.74% 00:39:50.348 cpu : usr=0.17%, sys=0.51%, ctx=499, majf=0, minf=1 00:39:50.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.348 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3709826: Mon Nov 25 14:37:55 2024 00:39:50.348 read: IOPS=88, BW=355KiB/s (363kB/s)(1112KiB/3136msec) 00:39:50.348 slat (usec): min=24, max=20808, avg=278.29, stdev=2108.56 00:39:50.348 clat (usec): min=821, max=42183, avg=10915.16, stdev=17452.51 00:39:50.348 lat (usec): min=848, max=42208, avg=11194.36, stdev=17435.88 00:39:50.348 clat percentiles (usec): 00:39:50.348 | 1.00th=[ 857], 5.00th=[ 922], 10.00th=[ 947], 20.00th=[ 979], 00:39:50.348 | 30.00th=[ 1020], 40.00th=[ 1090], 50.00th=[ 1172], 60.00th=[ 1205], 00:39:50.348 | 70.00th=[ 1287], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:39:50.348 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:50.348 | 99.99th=[42206] 00:39:50.348 bw ( KiB/s): min= 96, max= 657, per=28.98%, avg=333.50, stdev=262.11, samples=6 00:39:50.348 iops : min= 24, max= 164, avg=83.33, stdev=65.47, samples=6 00:39:50.348 lat (usec) : 1000=26.88% 00:39:50.348 lat (msec) : 2=48.39%, 4=0.36%, 50=24.01% 00:39:50.348 cpu : usr=0.06%, sys=0.35%, ctx=283, majf=0, minf=2 00:39:50.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 issued rwts: total=279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.348 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3709831: Mon Nov 25 14:37:55 2024 00:39:50.348 read: IOPS=24, BW=95.6KiB/s (97.9kB/s)(268KiB/2802msec) 00:39:50.348 slat 
(usec): min=26, max=17677, avg=286.65, stdev=2140.43 00:39:50.348 clat (usec): min=904, max=42070, avg=41201.77, stdev=5008.48 00:39:50.348 lat (usec): min=943, max=58962, avg=41492.29, stdev=5455.70 00:39:50.348 clat percentiles (usec): 00:39:50.348 | 1.00th=[ 906], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:39:50.348 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:39:50.348 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:50.348 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:50.348 | 99.99th=[42206] 00:39:50.348 bw ( KiB/s): min= 96, max= 96, per=8.35%, avg=96.00, stdev= 0.00, samples=5 00:39:50.348 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:39:50.348 lat (usec) : 1000=1.47% 00:39:50.348 lat (msec) : 50=97.06% 00:39:50.348 cpu : usr=0.00%, sys=0.14%, ctx=69, majf=0, minf=2 00:39:50.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.348 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3709835: Mon Nov 25 14:37:55 2024 00:39:50.348 read: IOPS=24, BW=97.0KiB/s (99.4kB/s)(252KiB/2597msec) 00:39:50.348 slat (nsec): min=26917, max=35268, avg=27526.92, stdev=1253.55 00:39:50.348 clat (usec): min=799, max=42171, avg=40845.90, stdev=5149.90 00:39:50.348 lat (usec): min=834, max=42204, avg=40873.43, stdev=5148.92 00:39:50.348 clat percentiles (usec): 00:39:50.348 | 1.00th=[ 799], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:50.348 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:39:50.348 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:50.348 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:50.348 | 99.99th=[42206] 00:39:50.348 bw ( KiB/s): min= 96, max= 104, per=8.44%, avg=97.60, stdev= 3.58, samples=5 00:39:50.348 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:39:50.348 lat (usec) : 1000=1.56% 00:39:50.348 lat (msec) : 50=96.88% 00:39:50.348 cpu : usr=0.00%, sys=0.12%, ctx=64, majf=0, minf=2 00:39:50.348 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:50.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:50.348 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:50.348 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:50.348 00:39:50.348 Run status group 0 (all jobs): 00:39:50.348 READ: bw=1149KiB/s (1177kB/s), 95.6KiB/s-666KiB/s (97.9kB/s-682kB/s), io=3604KiB (3690kB), run=2597-3136msec 00:39:50.348 00:39:50.348 Disk stats (read/write): 00:39:50.348 nvme0n1: ios=501/0, merge=0/0, ticks=3621/0, in_queue=3621, util=98.96% 00:39:50.348 nvme0n2: ios=264/0, merge=0/0, ticks=3013/0, in_queue=3013, util=93.87% 00:39:50.348 nvme0n3: ios=62/0, merge=0/0, ticks=2553/0, in_queue=2553, util=95.96% 00:39:50.348 nvme0n4: ios=64/0, merge=0/0, ticks=2585/0, in_queue=2585, util=96.35% 00:39:50.348 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:39:50.348 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:50.607 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:50.607 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:50.866 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:50.866 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:50.866 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:50.866 14:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:51.125 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:51.125 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3709638 00:39:51.125 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:51.125 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:51.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:51.125 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:51.384 nvmf hotplug test: fio failed as expected 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:51.384 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:51.384 rmmod nvme_tcp 00:39:51.644 rmmod nvme_fabrics 00:39:51.644 rmmod nvme_keyring 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3706457 ']' 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3706457 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3706457 ']' 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3706457 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3706457 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3706457' 00:39:51.644 killing process with pid 3706457 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3706457 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3706457 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
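(An aside on the teardown traced above, before the network cleanup that follows below.) The fio target test unwinds in a fixed order: disconnect the initiator, delete the subsystem over RPC, discard fio's verify-state files, stop the nvmf_tgt process, then unload the initiator modules. A minimal sketch of that order, assuming the nqn and $nvmfpid naming this harness uses:

    # teardown order as traced (nqn and $nvmfpid as used by this harness)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # drop the initiator-side connection
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f ./local-job*-verify.state                     # discard fio verify state
    kill "$nvmfpid" && wait "$nvmfpid"                  # stop the nvmf_tgt reactor process
    modprobe -v -r nvme-tcp                             # also drops nvme_fabrics/nvme_keyring, per the rmmod lines above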
00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:51.644 14:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:54.189 00:39:54.189 real 0m28.187s 00:39:54.189 user 2m16.780s 00:39:54.189 sys 0m11.895s 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:54.189 ************************************ 00:39:54.189 END TEST nvmf_fio_target 00:39:54.189 ************************************ 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:54.189 ************************************ 00:39:54.189 START TEST nvmf_bdevio 00:39:54.189 ************************************ 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:54.189 * Looking for test storage... 
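(A note on the iptr call traced above.) Every firewall rule the harness installs carries an 'SPDK_NVMF:' comment, so cleanup is a single filter over the saved ruleset rather than a list of per-rule deletions. The add/remove pair reduces to the sketch below; the verbatim forms appear in the nvmf/common.sh@790 and @791 traces in this log:

    # add side: tag the rule so it can be found again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # remove side: drop every tagged rule in one pass by round-tripping the ruleset
    iptables-save | grep -v SPDK_NVMF | iptables-restore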
00:39:54.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:39:54.189 14:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:54.189 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:54.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.189 --rc genhtml_branch_coverage=1 00:39:54.189 --rc genhtml_function_coverage=1 00:39:54.189 --rc genhtml_legend=1 00:39:54.189 --rc geninfo_all_blocks=1 00:39:54.189 --rc geninfo_unexecuted_blocks=1 00:39:54.189 00:39:54.189 ' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.190 --rc genhtml_branch_coverage=1 00:39:54.190 --rc genhtml_function_coverage=1 00:39:54.190 --rc genhtml_legend=1 00:39:54.190 --rc geninfo_all_blocks=1 00:39:54.190 --rc geninfo_unexecuted_blocks=1 00:39:54.190 00:39:54.190 ' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.190 --rc genhtml_branch_coverage=1 00:39:54.190 --rc genhtml_function_coverage=1 00:39:54.190 --rc genhtml_legend=1 00:39:54.190 --rc geninfo_all_blocks=1 00:39:54.190 --rc geninfo_unexecuted_blocks=1 00:39:54.190 00:39:54.190 ' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:54.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:54.190 --rc genhtml_branch_coverage=1 00:39:54.190 --rc genhtml_function_coverage=1 00:39:54.190 --rc genhtml_legend=1 00:39:54.190 --rc geninfo_all_blocks=1 00:39:54.190 --rc geninfo_unexecuted_blocks=1 00:39:54.190 00:39:54.190 ' 00:39:54.190 14:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:54.190 14:37:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:39:54.190 14:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:02.339 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:02.340 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:02.340 14:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:02.340 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:02.340 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:02.340 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:02.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:02.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:40:02.340 00:40:02.340 --- 10.0.0.2 ping statistics --- 00:40:02.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:02.340 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:02.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:02.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:40:02.340 00:40:02.340 --- 10.0.0.1 ping statistics --- 00:40:02.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:02.340 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:02.340 14:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.340 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3714866 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3714866 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3714866 ']' 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:02.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:02.341 14:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.341 [2024-11-25 14:38:06.734263] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:02.341 [2024-11-25 14:38:06.735394] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:40:02.341 [2024-11-25 14:38:06.735447] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:02.341 [2024-11-25 14:38:06.834157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:02.341 [2024-11-25 14:38:06.886705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:02.341 [2024-11-25 14:38:06.886758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:02.341 [2024-11-25 14:38:06.886766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:02.341 [2024-11-25 14:38:06.886773] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:02.341 [2024-11-25 14:38:06.886780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:02.341 [2024-11-25 14:38:06.888914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:02.341 [2024-11-25 14:38:06.889076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:02.341 [2024-11-25 14:38:06.889229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:02.341 [2024-11-25 14:38:06.889229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:02.341 [2024-11-25 14:38:06.965771] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
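The records above are nvmfappstart: nvmf_tgt is launched inside the test namespace with an interrupt-mode reactor mask (0x78, cores 3-6), and waitforlisten blocks until the RPC socket answers. A minimal sketch of the equivalent shell, assuming the usual SPDK repo layout (the poll loop is an approximation of the waitforlisten helper in autotest_common.sh; only the nvmf_tgt invocation is taken verbatim from the log):

    # launch the target in the namespace created by nvmf_tcp_init
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
    nvmfpid=$!

    # approximate waitforlisten: poll the RPC socket until the app responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done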
00:40:02.341 [2024-11-25 14:38:06.966988] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:02.341 [2024-11-25 14:38:06.967034] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:02.341 [2024-11-25 14:38:06.967454] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:02.341 [2024-11-25 14:38:06.967507] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.603 [2024-11-25 14:38:07.598104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.603 Malloc0 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.603 14:38:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:02.603 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:02.864 [2024-11-25 14:38:07.694364] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:02.864 { 00:40:02.864 "params": { 00:40:02.864 "name": "Nvme$subsystem", 00:40:02.864 "trtype": "$TEST_TRANSPORT", 00:40:02.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:02.864 "adrfam": "ipv4", 00:40:02.864 "trsvcid": "$NVMF_PORT", 00:40:02.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:02.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:02.864 "hdgst": ${hdgst:-false}, 00:40:02.864 "ddgst": ${ddgst:-false} 00:40:02.864 }, 00:40:02.864 "method": "bdev_nvme_attach_controller" 00:40:02.864 } 00:40:02.864 EOF 00:40:02.864 )") 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:02.864 14:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:02.864 "params": { 00:40:02.864 "name": "Nvme1", 00:40:02.864 "trtype": "tcp", 00:40:02.864 "traddr": "10.0.0.2", 00:40:02.864 "adrfam": "ipv4", 00:40:02.864 "trsvcid": "4420", 00:40:02.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:02.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:02.864 "hdgst": false, 00:40:02.864 "ddgst": false 00:40:02.864 }, 00:40:02.864 "method": "bdev_nvme_attach_controller" 00:40:02.864 }' 00:40:02.864 [2024-11-25 14:38:07.752666] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
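Collapsed out of the xtrace above, the target-side provisioning for this bdevio run comes down to five RPCs (arguments copied from the log; rpc_cmd is the test suite's wrapper around scripts/rpc.py):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches from the host side using the generated JSON printed above, handed over a process-substitution descriptor (--json /dev/fd/62).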
00:40:02.864 [2024-11-25 14:38:07.752739] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715197 ] 00:40:02.864 [2024-11-25 14:38:07.847595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:02.864 [2024-11-25 14:38:07.904207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:02.864 [2024-11-25 14:38:07.904293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:02.864 [2024-11-25 14:38:07.904310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:03.124 I/O targets: 00:40:03.124 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:03.124 00:40:03.124 00:40:03.124 CUnit - A unit testing framework for C - Version 2.1-3 00:40:03.124 http://cunit.sourceforge.net/ 00:40:03.124 00:40:03.124 00:40:03.124 Suite: bdevio tests on: Nvme1n1 00:40:03.385 Test: blockdev write read block ...passed 00:40:03.385 Test: blockdev write zeroes read block ...passed 00:40:03.385 Test: blockdev write zeroes read no split ...passed 00:40:03.385 Test: blockdev write zeroes read split ...passed 00:40:03.385 Test: blockdev write zeroes read split partial ...passed 00:40:03.385 Test: blockdev reset ...[2024-11-25 14:38:08.316248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:03.385 [2024-11-25 14:38:08.316333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15e1400 (9): Bad file descriptor 00:40:03.385 [2024-11-25 14:38:08.365094] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:40:03.385 passed 00:40:03.385 Test: blockdev write read 8 blocks ...passed 00:40:03.385 Test: blockdev write read size > 128k ...passed 00:40:03.385 Test: blockdev write read invalid size ...passed 00:40:03.645 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:03.645 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:03.645 Test: blockdev write read max offset ...passed 00:40:03.645 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:03.645 Test: blockdev writev readv 8 blocks ...passed 00:40:03.645 Test: blockdev writev readv 30 x 1block ...passed 00:40:03.645 Test: blockdev writev readv block ...passed 00:40:03.645 Test: blockdev writev readv size > 128k ...passed 00:40:03.645 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:03.645 Test: blockdev comparev and writev ...[2024-11-25 14:38:08.669260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.669309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:03.645 [2024-11-25 14:38:08.669333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.669342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:03.645 [2024-11-25 14:38:08.669822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.669835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:03.645 [2024-11-25 14:38:08.669849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.669858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:03.645 [2024-11-25 14:38:08.670342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.670355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:03.645 [2024-11-25 14:38:08.670369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.670377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:03.645 [2024-11-25 14:38:08.670836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.670848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:03.645 [2024-11-25 14:38:08.670861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:03.645 [2024-11-25 14:38:08.670870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:03.645 passed 00:40:03.907 Test: blockdev nvme passthru rw ...passed 00:40:03.907 Test: blockdev nvme passthru vendor specific ...[2024-11-25 14:38:08.754846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:03.907 [2024-11-25 14:38:08.754861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:03.907 [2024-11-25 14:38:08.755128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:03.907 [2024-11-25 14:38:08.755139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:03.907 [2024-11-25 14:38:08.755405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:03.907 [2024-11-25 14:38:08.755416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:03.907 [2024-11-25 14:38:08.755653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:03.907 [2024-11-25 14:38:08.755663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:03.907 passed 00:40:03.907 Test: blockdev nvme admin passthru ...passed 00:40:03.907 Test: blockdev copy ...passed 00:40:03.907 00:40:03.907 Run Summary: Type Total Ran Passed Failed Inactive 00:40:03.907 suites 1 1 n/a 0 0 00:40:03.907 tests 23 23 23 0 0 00:40:03.907 asserts 152 152 152 0 n/a 00:40:03.907 00:40:03.907 Elapsed time = 1.251 seconds 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:03.907 14:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:03.907 rmmod nvme_tcp 00:40:03.907 rmmod nvme_fabrics 00:40:04.168 rmmod nvme_keyring 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
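nvmfcleanup retries the module unload because the kernel initiator modules can briefly hold references after the test disconnects; the trace above corresponds roughly to the loop below (the break and sleep are assumptions, only the individual commands appear in the log):

    sync
    set +e
    for i in {1..20}; do
        # -v prints the rmmod lines seen above (nvme_tcp, nvme_fabrics, nvme_keyring)
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e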
00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3714866 ']' 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3714866 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3714866 ']' 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3714866 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3714866 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3714866' 00:40:04.168 killing process with pid 3714866 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3714866 00:40:04.168 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3714866 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.428 14:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:06.343 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:06.343 00:40:06.343 real 0m12.488s 00:40:06.343 user 
0m10.543s 00:40:06.343 sys 0m6.538s 00:40:06.343 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:06.343 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:06.343 ************************************ 00:40:06.343 END TEST nvmf_bdevio 00:40:06.343 ************************************ 00:40:06.343 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:06.343 00:40:06.343 real 5m0.873s 00:40:06.343 user 10m21.469s 00:40:06.343 sys 2m5.556s 00:40:06.343 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:06.343 14:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:06.343 ************************************ 00:40:06.343 END TEST nvmf_target_core_interrupt_mode 00:40:06.343 ************************************ 00:40:06.604 14:38:11 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:06.604 14:38:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:06.604 14:38:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:06.604 14:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:06.604 ************************************ 00:40:06.604 START TEST nvmf_interrupt 00:40:06.604 ************************************ 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:06.604 * Looking for test storage... 
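Each suite is driven through run_test, which produces the START TEST / END TEST banners and the real/user/sys timing seen here. A simplified sketch of the helper (the actual version in autotest_common.sh also manages xtrace state and exit-code bookkeeping, so treat this as an approximation):

    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_interrupt ./test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode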
00:40:06.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:06.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.604 --rc genhtml_branch_coverage=1 00:40:06.604 --rc genhtml_function_coverage=1 00:40:06.604 --rc genhtml_legend=1 00:40:06.604 --rc geninfo_all_blocks=1 00:40:06.604 --rc geninfo_unexecuted_blocks=1 00:40:06.604 00:40:06.604 ' 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:06.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.604 --rc genhtml_branch_coverage=1 00:40:06.604 --rc genhtml_function_coverage=1 00:40:06.604 --rc genhtml_legend=1 00:40:06.604 --rc geninfo_all_blocks=1 00:40:06.604 --rc geninfo_unexecuted_blocks=1 00:40:06.604 00:40:06.604 ' 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:06.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.604 --rc genhtml_branch_coverage=1 00:40:06.604 --rc genhtml_function_coverage=1 00:40:06.604 --rc genhtml_legend=1 00:40:06.604 --rc geninfo_all_blocks=1 00:40:06.604 --rc geninfo_unexecuted_blocks=1 00:40:06.604 00:40:06.604 ' 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:06.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:06.604 --rc genhtml_branch_coverage=1 00:40:06.604 --rc genhtml_function_coverage=1 00:40:06.604 --rc genhtml_legend=1 00:40:06.604 --rc geninfo_all_blocks=1 00:40:06.604 --rc geninfo_unexecuted_blocks=1 00:40:06.604 00:40:06.604 ' 00:40:06.604 14:38:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:06.866 14:38:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:15.011 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.011 14:38:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:15.011 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:15.011 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:15.011 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:15.011 14:38:18 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:15.011 14:38:18 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:15.011 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:15.011 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:15.011 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:15.011 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:15.011 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:15.011 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:15.011 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:15.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:15.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:40:15.012 00:40:15.012 --- 10.0.0.2 ping statistics --- 00:40:15.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.012 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:15.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:15.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:40:15.012 00:40:15.012 --- 10.0.0.1 ping statistics --- 00:40:15.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:15.012 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3719559 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3719559 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3719559 ']' 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:15.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:15.012 14:38:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.012 [2024-11-25 14:38:19.316616] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:15.012 [2024-11-25 14:38:19.317759] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:40:15.012 [2024-11-25 14:38:19.317815] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:15.012 [2024-11-25 14:38:19.418272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:15.012 [2024-11-25 14:38:19.470056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:40:15.012 [2024-11-25 14:38:19.470109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:15.012 [2024-11-25 14:38:19.470117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:15.012 [2024-11-25 14:38:19.470124] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:15.012 [2024-11-25 14:38:19.470131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:15.012 [2024-11-25 14:38:19.471742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:15.012 [2024-11-25 14:38:19.471746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:15.012 [2024-11-25 14:38:19.547917] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:15.012 [2024-11-25 14:38:19.548510] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:15.012 [2024-11-25 14:38:19.548810] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:15.274 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:15.274 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:15.274 14:38:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:15.275 5000+0 records in 00:40:15.275 5000+0 records out 00:40:15.275 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0204679 s, 500 MB/s 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.275 AIO0 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.275 [2024-11-25 14:38:20.256803] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.275 14:38:20 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:15.275 [2024-11-25 14:38:20.301254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3719559 0 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3719559 0 idle 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:15.275 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719559 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.32 reactor_0' 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719559 root 20 0 128.2g 43776 32256 S 6.7 0.0 0:00.32 reactor_0 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3719559 1 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3719559 1 idle 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.537 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:15.538 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719563 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719563 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3719921 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3719559 0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3719559 0 busy 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719559 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.48 reactor_0' 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719559 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.48 reactor_0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3719559 1 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3719559 1 busy 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:15.800 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:15.801 14:38:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719563 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1' 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719563 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:16.062 14:38:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3719921 00:40:26.069 Initializing NVMe Controllers 00:40:26.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:26.069 Controller IO queue size 256, less than required. 00:40:26.069 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:26.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:26.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:26.069 Initialization complete. Launching workers. 
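Before the perf numbers arrive below, it is worth spelling out what the busy/idle probes traced above actually do: take one batch sample of the reactor thread with top, strip the row down to its %CPU column, truncate the decimal, and compare against a threshold. A condensed sketch reconstructed from the traced interrupt/common.sh commands (the retry loop is omitted and the exact function body is assumed, not the verbatim source):

    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=${BUSY_THRESHOLD:-65} idle_threshold=30
        local top_reactor cpu_rate
        # One batch sample (-b -n 1) of the threads (-H) of $pid; keep the reactor row
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}   # truncate: 99.9 -> 99, 6.7 -> 6, 0.0 -> 0
        if [[ $state == busy ]]; then
            (( cpu_rate >= busy_threshold ))   # busy reactors must clear the bar
        else
            (( cpu_rate <= idle_threshold ))   # idle reactors must stay under it
        fi
    }

This matches the trace: with spdk_nvme_perf running, reactor_0 and reactor_1 report 99.9% CPU and pass the busy check against the lowered BUSY_THRESHOLD=30; once the 10-second run ends they fall back toward single digits and pass the idle check again.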
00:40:26.069 ========================================================
00:40:26.069 Latency(us)
00:40:26.069 Device Information : IOPS MiB/s Average min max
00:40:26.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19258.60 75.23 13296.54 3915.75 33682.64
00:40:26.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19495.00 76.15 13132.48 8243.77 27723.96
00:40:26.069 ========================================================
00:40:26.069 Total : 38753.60 151.38 13214.01 3915.75 33682.64
00:40:26.069
00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3719559 0 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3719559 0 idle 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:26.069 14:38:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719559 root 20 0 128.2g 44928 32256 R 6.7 0.0 0:20.30 reactor_0' 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719559 root 20 0 128.2g 44928 32256 R 6.7 0.0 0:20.30 reactor_0 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3719559 1 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3719559 1 idle 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:26.069 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719563 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719563 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:26.331 14:38:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:26.904 14:38:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:26.904 14:38:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:26.904 14:38:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:26.904 14:38:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:26.904 14:38:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3719559 0 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3719559 0 idle 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:29.454 14:38:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719559 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0' 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719559 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.67 reactor_0 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:29.454 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3719559 1 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3719559 1 idle 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3719559 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
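The waitforserial helper traced just above is a plain polling loop: after nvme connect, retry lsblk every two seconds until a block device carrying the expected serial number shows up. A condensed sketch in the shape of the traced autotest_common.sh commands (the body is reconstructed from the xtrace, not copied from the source):

    waitforserial() {
        local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
        while (( i++ <= 15 )); do            # bounded retries, as in the trace
            sleep 2
            # count block devices whose SERIAL column matches
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

In this run a single pass finds the SPDKISFASTANDAWESOME namespace, and the idle checks that follow confirm the point of the test: in interrupt mode a connected but quiet controller leaves both reactors at or near 0% CPU.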
00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3719559 -w 256 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3719563 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3719563 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:29.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:29.455 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:29.455 rmmod nvme_tcp 00:40:29.455 rmmod nvme_fabrics 00:40:29.455 rmmod nvme_keyring 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3719559 ']' 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3719559 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3719559 ']' 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3719559 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3719559 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3719559' 00:40:29.716 killing process with pid 3719559 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3719559 00:40:29.716 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3719559 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:29.977 14:38:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:31.891 14:38:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:31.891 00:40:31.891 real 0m25.405s 00:40:31.891 user 0m40.241s 00:40:31.891 sys 0m9.863s 00:40:31.891 14:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.891 14:38:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:31.891 ************************************ 00:40:31.891 END TEST nvmf_interrupt 00:40:31.891 ************************************ 00:40:31.891 00:40:31.891 real 30m10.061s 00:40:31.891 user 61m33.533s 00:40:31.891 sys 10m18.154s 00:40:31.891 14:38:36 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:31.891 14:38:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:31.891 ************************************ 00:40:31.891 END TEST nvmf_tcp 00:40:31.891 ************************************ 00:40:31.891 14:38:36 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:31.891 14:38:36 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:31.891 14:38:36 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:31.891 14:38:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:31.891 14:38:36 -- common/autotest_common.sh@10 -- # set +x 00:40:32.153 ************************************ 00:40:32.153 START TEST spdkcli_nvmf_tcp 00:40:32.153 ************************************ 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:32.153 * Looking for test storage... 00:40:32.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:32.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.153 --rc genhtml_branch_coverage=1 00:40:32.153 --rc genhtml_function_coverage=1 00:40:32.153 --rc genhtml_legend=1 00:40:32.153 --rc geninfo_all_blocks=1 00:40:32.153 --rc geninfo_unexecuted_blocks=1 00:40:32.153 00:40:32.153 ' 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:32.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.153 --rc genhtml_branch_coverage=1 00:40:32.153 --rc genhtml_function_coverage=1 00:40:32.153 --rc genhtml_legend=1 00:40:32.153 --rc geninfo_all_blocks=1 00:40:32.153 --rc geninfo_unexecuted_blocks=1 00:40:32.153 00:40:32.153 ' 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:32.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.153 --rc genhtml_branch_coverage=1 00:40:32.153 --rc genhtml_function_coverage=1 00:40:32.153 --rc genhtml_legend=1 00:40:32.153 --rc geninfo_all_blocks=1 00:40:32.153 --rc geninfo_unexecuted_blocks=1 00:40:32.153 00:40:32.153 ' 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:32.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.153 --rc genhtml_branch_coverage=1 00:40:32.153 --rc genhtml_function_coverage=1 00:40:32.153 --rc genhtml_legend=1 00:40:32.153 --rc geninfo_all_blocks=1 00:40:32.153 --rc geninfo_unexecuted_blocks=1 00:40:32.153 00:40:32.153 ' 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:32.153 14:38:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:32.154 
14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:32.154 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:32.415 14:38:37 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:32.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3723098 00:40:32.415 14:38:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3723098 00:40:32.416 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3723098 ']' 00:40:32.416 14:38:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:32.416 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:32.416 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:32.416 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:32.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:32.416 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:32.416 14:38:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:32.416 [2024-11-25 14:38:37.310813] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
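For orientation, the spdkcli test's startup traced above follows the usual harness pattern: background the nvmf_tgt binary with a two-core mask (-m 0x3, main core -p 0), record its pid, and block until the RPC Unix socket is usable. A minimal sketch of that launch-and-wait pattern (assumed shape; $rootdir stands in for the absolute workspace path seen in the trace, and the real helpers live in spdkcli/common.sh and autotest_common.sh):

    run_nvmf_tgt() {
        "$rootdir/build/bin/nvmf_tgt" -m 0x3 -p 0 &
        nvmf_tgt_pid=$!
        waitforlisten "$nvmf_tgt_pid"
    }

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < 100; i++ )); do            # max_retries=100, as traced
            kill -0 "$pid" 2> /dev/null || return 1  # target died before listening
            [[ -S $rpc_addr ]] && return 0           # socket exists; assume it listens
            sleep 0.1
        done
        return 1
    }

Once the socket answers, everything else in this test is driven through it: spdkcli_job.py issues the /bdevs and /nvmf commands whose 'Executing command' echoes follow below.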
00:40:32.416 [2024-11-25 14:38:37.310880] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723098 ] 00:40:32.416 [2024-11-25 14:38:37.403673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:32.416 [2024-11-25 14:38:37.458132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:32.416 [2024-11-25 14:38:37.458138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:33.357 14:38:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:33.357 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:33.357 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:33.357 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:33.357 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:33.357 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:33.357 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:33.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:33.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:33.357 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:33.357 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:33.357 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:33.357 ' 00:40:35.898 [2024-11-25 14:38:40.851566] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:37.280 [2024-11-25 14:38:42.215796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:39.833 [2024-11-25 14:38:44.746816] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:40:42.376 [2024-11-25 14:38:46.973176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:43.763 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:43.763 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:43.763 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:43.763 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:43.763 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:43.763 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:43.763 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:43.763 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:43.763 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:43.763 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:43.763 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:43.763 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:43.763 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:43.763 14:38:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:44.348 
14:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:44.348 14:38:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:44.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:44.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:44.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:44.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:44.348 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:44.348 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:44.348 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:44.348 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:44.348 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:44.348 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:44.348 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:44.348 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:44.348 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:44.348 ' 00:40:51.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:51.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:51.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:51.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:51.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:51.030 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:51.030 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:51.030 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:51.030 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:51.030 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:51.030 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:51.030 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:51.030 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:51.030 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:51.030 
14:38:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3723098 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3723098 ']' 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3723098 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:51.030 14:38:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723098 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723098' 00:40:51.030 killing process with pid 3723098 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3723098 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3723098 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3723098 ']' 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3723098 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3723098 ']' 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3723098 00:40:51.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3723098) - No such process 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3723098 is not found' 00:40:51.030 Process with pid 3723098 is not found 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:51.030 00:40:51.030 real 0m18.149s 00:40:51.030 user 0m40.321s 00:40:51.030 sys 0m0.892s 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:51.030 14:38:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:51.030 ************************************ 00:40:51.030 END TEST spdkcli_nvmf_tcp 00:40:51.030 ************************************ 00:40:51.030 14:38:55 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:51.030 14:38:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:51.030 14:38:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:51.030 14:38:55 -- common/autotest_common.sh@10 -- # set +x 00:40:51.030 ************************************ 00:40:51.030 START TEST nvmf_identify_passthru 00:40:51.030 ************************************ 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:51.030 * Looking for test 
storage... 00:40:51.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:51.030 14:38:55 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:51.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.030 --rc genhtml_branch_coverage=1 00:40:51.030 --rc genhtml_function_coverage=1 00:40:51.030 --rc genhtml_legend=1 00:40:51.030 --rc geninfo_all_blocks=1 00:40:51.030 --rc geninfo_unexecuted_blocks=1 00:40:51.030 00:40:51.030 ' 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:51.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.030 --rc genhtml_branch_coverage=1 00:40:51.030 --rc genhtml_function_coverage=1 00:40:51.030 --rc genhtml_legend=1 00:40:51.030 --rc geninfo_all_blocks=1 00:40:51.030 --rc geninfo_unexecuted_blocks=1 00:40:51.030 00:40:51.030 ' 00:40:51.030 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:51.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.031 --rc genhtml_branch_coverage=1 00:40:51.031 --rc genhtml_function_coverage=1 00:40:51.031 --rc genhtml_legend=1 00:40:51.031 --rc geninfo_all_blocks=1 00:40:51.031 --rc geninfo_unexecuted_blocks=1 00:40:51.031 00:40:51.031 ' 00:40:51.031 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:51.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:51.031 --rc genhtml_branch_coverage=1 00:40:51.031 --rc genhtml_function_coverage=1 00:40:51.031 --rc genhtml_legend=1 00:40:51.031 --rc geninfo_all_blocks=1 00:40:51.031 --rc geninfo_unexecuted_blocks=1 00:40:51.031 00:40:51.031 ' 00:40:51.031 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:51.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:51.031 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:51.031 14:38:55 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:51.031 14:38:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:51.031 14:38:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.031 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:40:51.031 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:51.031 14:38:55 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:40:51.031 14:38:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:57.618 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:57.618 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:40:57.618 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:57.618 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:57.618 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:57.618 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:40:57.619 14:39:02 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:57.619 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:57.619 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:57.619 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:57.619 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:57.619 14:39:02 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:57.619 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:57.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:57.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:40:57.880 00:40:57.880 --- 10.0.0.2 ping statistics --- 00:40:57.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.880 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:57.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:57.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:40:57.880 00:40:57.880 --- 10.0.0.1 ping statistics --- 00:40:57.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:57.880 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:57.880 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:40:57.881 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:57.881 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:57.881 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:57.881 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:57.881 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:57.881 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:57.881 14:39:02 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:58.141 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:58.141 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:40:58.141 14:39:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:40:58.141 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:40:58.141 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:40:58.141 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:58.141 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:58.141 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:58.712 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:40:58.712 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:58.712 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:58.712 14:39:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:59.284 14:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:40:59.284 14:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.284 14:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.284 14:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3730621 00:40:59.284 14:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:59.284 14:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:59.284 14:39:04 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3730621 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3730621 ']' 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:59.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:59.284 14:39:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:59.284 [2024-11-25 14:39:04.240722] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:40:59.284 [2024-11-25 14:39:04.240792] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:59.284 [2024-11-25 14:39:04.340788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:59.545 [2024-11-25 14:39:04.395143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:59.545 [2024-11-25 14:39:04.395204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:59.545 [2024-11-25 14:39:04.395213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:59.545 [2024-11-25 14:39:04.395220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:59.545 [2024-11-25 14:39:04.395227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:59.545 [2024-11-25 14:39:04.397681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:59.545 [2024-11-25 14:39:04.397815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:59.545 [2024-11-25 14:39:04.397977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.545 [2024-11-25 14:39:04.397977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:00.118 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.118 INFO: Log level set to 20 00:41:00.118 INFO: Requests: 00:41:00.118 { 00:41:00.118 "jsonrpc": "2.0", 00:41:00.118 "method": "nvmf_set_config", 00:41:00.118 "id": 1, 00:41:00.118 "params": { 00:41:00.118 "admin_cmd_passthru": { 00:41:00.118 "identify_ctrlr": true 00:41:00.118 } 00:41:00.118 } 00:41:00.118 } 00:41:00.118 00:41:00.118 INFO: response: 00:41:00.118 { 00:41:00.118 "jsonrpc": "2.0", 00:41:00.118 "id": 1, 00:41:00.118 "result": true 00:41:00.118 } 00:41:00.118 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.118 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.118 INFO: Setting log level to 20 00:41:00.118 INFO: Setting log level to 20 00:41:00.118 INFO: Log level set to 20 00:41:00.118 INFO: Log level set to 20 00:41:00.118 INFO: Requests: 00:41:00.118 { 00:41:00.118 "jsonrpc": "2.0", 00:41:00.118 "method": "framework_start_init", 00:41:00.118 "id": 1 00:41:00.118 } 00:41:00.118 00:41:00.118 INFO: Requests: 00:41:00.118 { 00:41:00.118 "jsonrpc": "2.0", 00:41:00.118 "method": "framework_start_init", 00:41:00.118 "id": 1 00:41:00.118 } 00:41:00.118 00:41:00.118 [2024-11-25 14:39:05.164416] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:00.118 INFO: response: 00:41:00.118 { 00:41:00.118 "jsonrpc": "2.0", 00:41:00.118 "id": 1, 00:41:00.118 "result": true 00:41:00.118 } 00:41:00.118 00:41:00.118 INFO: response: 00:41:00.118 { 00:41:00.118 "jsonrpc": "2.0", 00:41:00.118 "id": 1, 00:41:00.118 "result": true 00:41:00.118 } 00:41:00.118 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.118 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.118 14:39:05 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:41:00.118 INFO: Setting log level to 40 00:41:00.118 INFO: Setting log level to 40 00:41:00.118 INFO: Setting log level to 40 00:41:00.118 [2024-11-25 14:39:05.177989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.118 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:00.118 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.379 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:41:00.379 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.379 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.639 Nvme0n1 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.639 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.639 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.639 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.639 [2024-11-25 14:39:05.581470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.639 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:00.639 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:00.639 [ 00:41:00.639 { 00:41:00.639 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:00.639 "subtype": "Discovery", 00:41:00.639 "listen_addresses": [], 00:41:00.639 "allow_any_host": true, 00:41:00.639 "hosts": [] 00:41:00.639 }, 00:41:00.639 { 00:41:00.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:00.639 "subtype": "NVMe", 00:41:00.639 "listen_addresses": [ 00:41:00.639 { 00:41:00.639 "trtype": "TCP", 00:41:00.640 "adrfam": "IPv4", 00:41:00.640 "traddr": "10.0.0.2", 00:41:00.640 "trsvcid": "4420" 00:41:00.640 } 00:41:00.640 ], 00:41:00.640 "allow_any_host": true, 00:41:00.640 "hosts": [], 00:41:00.640 "serial_number": 
"SPDK00000000000001", 00:41:00.640 "model_number": "SPDK bdev Controller", 00:41:00.640 "max_namespaces": 1, 00:41:00.640 "min_cntlid": 1, 00:41:00.640 "max_cntlid": 65519, 00:41:00.640 "namespaces": [ 00:41:00.640 { 00:41:00.640 "nsid": 1, 00:41:00.640 "bdev_name": "Nvme0n1", 00:41:00.640 "name": "Nvme0n1", 00:41:00.640 "nguid": "36344730526054870025384500000044", 00:41:00.640 "uuid": "36344730-5260-5487-0025-384500000044" 00:41:00.640 } 00:41:00.640 ] 00:41:00.640 } 00:41:00.640 ] 00:41:00.640 14:39:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:00.640 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:00.640 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:00.640 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:00.900 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:41:00.900 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:00.900 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:00.900 14:39:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:01.161 14:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:41:01.161 14:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:41:01.161 14:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:41:01.161 14:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:01.161 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:01.161 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:01.161 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:01.161 14:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:01.161 14:39:06 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:01.161 rmmod nvme_tcp 00:41:01.161 rmmod nvme_fabrics 00:41:01.161 rmmod nvme_keyring 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3730621 ']' 00:41:01.161 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3730621 00:41:01.161 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3730621 ']' 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3730621 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3730621 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3730621' 00:41:01.162 killing process with pid 3730621 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3730621 00:41:01.162 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3730621 00:41:01.422 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:01.422 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:01.422 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:01.422 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:01.422 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:01.422 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:01.422 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:41:01.683 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:01.683 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:01.683 14:39:06 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:01.683 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:01.683 14:39:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.596 14:39:08 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:03.596 00:41:03.596 real 0m13.352s 00:41:03.596 user 0m10.590s 00:41:03.596 sys 0m6.859s 00:41:03.596 14:39:08 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:03.596 14:39:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:03.596 ************************************ 00:41:03.596 END TEST nvmf_identify_passthru 00:41:03.596 ************************************ 00:41:03.596 14:39:08 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:03.596 14:39:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:03.596 14:39:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:03.596 14:39:08 -- common/autotest_common.sh@10 -- # set +x 00:41:03.596 ************************************ 00:41:03.596 START TEST nvmf_dif 00:41:03.596 ************************************ 00:41:03.596 14:39:08 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:03.857 * Looking for test storage... 
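Before the dif run repeats the same bring-up below, note that the nvmftestfini path just traced condenses to a short sequence. A sketch, assuming the interface and namespace names from this run (cvl_0_1, cvl_0_0_ns_spdk) and glossing over the harness's _remove_spdk_ns wrapper:

  # TCP-mode cleanup: unload the initiator modules, keep only the
  # non-SPDK iptables rules, drop the target's netns, and flush the
  # initiator-side address.
  modprobe -r nvme-tcp nvme-fabrics || true   # nvme_keyring goes too, as an unused dependency
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  ip -4 addr flush cvl_0_1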
00:41:03.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:03.857 14:39:08 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:03.857 14:39:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:41:03.857 14:39:08 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:03.857 14:39:08 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:03.857 14:39:08 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:03.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.858 --rc genhtml_branch_coverage=1 00:41:03.858 --rc genhtml_function_coverage=1 00:41:03.858 --rc genhtml_legend=1 00:41:03.858 --rc geninfo_all_blocks=1 00:41:03.858 --rc geninfo_unexecuted_blocks=1 00:41:03.858 00:41:03.858 ' 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:03.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.858 --rc genhtml_branch_coverage=1 00:41:03.858 --rc genhtml_function_coverage=1 00:41:03.858 --rc genhtml_legend=1 00:41:03.858 --rc geninfo_all_blocks=1 00:41:03.858 --rc geninfo_unexecuted_blocks=1 00:41:03.858 00:41:03.858 ' 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:41:03.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.858 --rc genhtml_branch_coverage=1 00:41:03.858 --rc genhtml_function_coverage=1 00:41:03.858 --rc genhtml_legend=1 00:41:03.858 --rc geninfo_all_blocks=1 00:41:03.858 --rc geninfo_unexecuted_blocks=1 00:41:03.858 00:41:03.858 ' 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:03.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:03.858 --rc genhtml_branch_coverage=1 00:41:03.858 --rc genhtml_function_coverage=1 00:41:03.858 --rc genhtml_legend=1 00:41:03.858 --rc geninfo_all_blocks=1 00:41:03.858 --rc geninfo_unexecuted_blocks=1 00:41:03.858 00:41:03.858 ' 00:41:03.858 14:39:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:03.858 14:39:08 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:03.858 14:39:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.858 14:39:08 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.858 14:39:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.858 14:39:08 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:03.858 14:39:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:03.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:03.858 14:39:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:03.858 14:39:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:41:03.858 14:39:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:03.858 14:39:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:03.858 14:39:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:03.858 14:39:08 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:41:03.858 14:39:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:12.002 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:12.002 
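The same classification repeats below for the second port (0000:4b:00.1); at bottom, each iteration is a sysfs walk from a matching PCI function to its kernel net device. A minimal standalone sketch, assuming the Intel E810 device ID (0x159b) seen in this run:

  # Find E810 functions by vendor:device ID and print the net device
  # behind each one, echoing the "Found net devices under ..." summary
  # the harness emits.
  for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
      done
  done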
14:39:16 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:12.002 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:12.002 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:12.002 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:12.002 14:39:16 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:12.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:12.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:41:12.003 00:41:12.003 --- 10.0.0.2 ping statistics --- 00:41:12.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:12.003 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:12.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:12.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:41:12.003 00:41:12.003 --- 10.0.0.1 ping statistics --- 00:41:12.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:12.003 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:12.003 14:39:16 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:15.307 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:41:15.307 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:15.307 14:39:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:15.307 14:39:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3737192 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3737192 00:41:15.307 14:39:20 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3737192 ']' 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:41:15.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:15.307 14:39:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:15.307 [2024-11-25 14:39:20.380296] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:41:15.307 [2024-11-25 14:39:20.380362] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:15.569 [2024-11-25 14:39:20.478576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:15.569 [2024-11-25 14:39:20.530041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:15.569 [2024-11-25 14:39:20.530092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:15.569 [2024-11-25 14:39:20.530100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:15.569 [2024-11-25 14:39:20.530107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:15.569 [2024-11-25 14:39:20.530113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:15.569 [2024-11-25 14:39:20.530919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:16.140 14:39:21 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:16.140 14:39:21 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:16.140 14:39:21 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:16.140 14:39:21 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:16.140 14:39:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:16.400 14:39:21 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:16.400 14:39:21 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:16.401 14:39:21 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:16.401 14:39:21 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.401 14:39:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:16.401 [2024-11-25 14:39:21.256798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:16.401 14:39:21 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.401 14:39:21 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:16.401 14:39:21 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:16.401 14:39:21 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:16.401 14:39:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:16.401 ************************************ 00:41:16.401 START TEST fio_dif_1_default 00:41:16.401 ************************************ 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:16.401 bdev_null0 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:16.401 [2024-11-25 14:39:21.345254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:16.401 { 00:41:16.401 "params": { 00:41:16.401 "name": "Nvme$subsystem", 00:41:16.401 "trtype": "$TEST_TRANSPORT", 00:41:16.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:16.401 "adrfam": "ipv4", 00:41:16.401 "trsvcid": "$NVMF_PORT", 00:41:16.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:16.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:16.401 "hdgst": ${hdgst:-false}, 00:41:16.401 
"ddgst": ${ddgst:-false} 00:41:16.401 }, 00:41:16.401 "method": "bdev_nvme_attach_controller" 00:41:16.401 } 00:41:16.401 EOF 00:41:16.401 )") 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:16.401 "params": { 00:41:16.401 "name": "Nvme0", 00:41:16.401 "trtype": "tcp", 00:41:16.401 "traddr": "10.0.0.2", 00:41:16.401 "adrfam": "ipv4", 00:41:16.401 "trsvcid": "4420", 00:41:16.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:16.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:16.401 "hdgst": false, 00:41:16.401 "ddgst": false 00:41:16.401 }, 00:41:16.401 "method": "bdev_nvme_attach_controller" 00:41:16.401 }' 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:16.401 14:39:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:16.971 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:16.971 fio-3.35 00:41:16.971 Starting 1 thread 00:41:29.207 00:41:29.207 filename0: (groupid=0, jobs=1): err= 0: pid=3737773: Mon Nov 25 14:39:32 2024 00:41:29.207 read: IOPS=190, BW=762KiB/s (780kB/s)(7632KiB/10020msec) 00:41:29.207 slat (nsec): min=5451, max=68705, avg=6206.02, stdev=1937.85 00:41:29.207 clat (usec): min=487, max=43838, avg=20989.34, stdev=20205.35 00:41:29.207 lat (usec): min=492, max=43874, avg=20995.55, stdev=20205.32 00:41:29.207 clat percentiles (usec): 00:41:29.207 | 1.00th=[ 553], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 816], 00:41:29.207 | 30.00th=[ 840], 40.00th=[ 898], 50.00th=[ 1074], 60.00th=[41157], 00:41:29.207 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:29.207 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:41:29.207 | 99.99th=[43779] 00:41:29.207 bw ( KiB/s): min= 704, max= 832, per=99.91%, avg=761.60, stdev=28.62, samples=20 00:41:29.207 iops : min= 176, max= 208, avg=190.40, stdev= 7.16, samples=20 00:41:29.207 lat (usec) : 500=0.31%, 750=2.52%, 1000=46.96% 00:41:29.207 lat (msec) : 2=0.31%, 50=49.90% 00:41:29.207 cpu : usr=93.41%, sys=6.38%, ctx=13, majf=0, minf=258 00:41:29.207 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.207 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.207 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:41:29.207 00:41:29.207 Run status group 0 (all jobs): 00:41:29.207 READ: bw=762KiB/s (780kB/s), 762KiB/s-762KiB/s (780kB/s-780kB/s), io=7632KiB (7815kB), run=10020-10020msec 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.207 00:41:29.207 real 0m11.170s 00:41:29.207 user 0m22.586s 00:41:29.207 sys 0m0.986s 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.207 14:39:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:29.207 ************************************ 00:41:29.207 END TEST fio_dif_1_default 00:41:29.207 ************************************ 00:41:29.207 14:39:32 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:29.207 14:39:32 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:29.208 14:39:32 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 ************************************ 00:41:29.208 START TEST fio_dif_1_multi_subsystems 00:41:29.208 ************************************ 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 bdev_null0 00:41:29.208 14:39:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 [2024-11-25 14:39:32.594216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 bdev_null1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:29.208 { 00:41:29.208 "params": { 00:41:29.208 "name": "Nvme$subsystem", 00:41:29.208 "trtype": "$TEST_TRANSPORT", 00:41:29.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.208 "adrfam": "ipv4", 00:41:29.208 "trsvcid": "$NVMF_PORT", 00:41:29.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.208 "hdgst": ${hdgst:-false}, 00:41:29.208 "ddgst": ${ddgst:-false} 00:41:29.208 }, 00:41:29.208 "method": "bdev_nvme_attach_controller" 00:41:29.208 } 00:41:29.208 EOF 00:41:29.208 )") 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.208 
14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:29.208 { 00:41:29.208 "params": { 00:41:29.208 "name": "Nvme$subsystem", 00:41:29.208 "trtype": "$TEST_TRANSPORT", 00:41:29.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:29.208 "adrfam": "ipv4", 00:41:29.208 "trsvcid": "$NVMF_PORT", 00:41:29.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.208 "hdgst": ${hdgst:-false}, 00:41:29.208 "ddgst": ${ddgst:-false} 00:41:29.208 }, 00:41:29.208 "method": "bdev_nvme_attach_controller" 00:41:29.208 } 00:41:29.208 EOF 00:41:29.208 )") 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:29.208 "params": { 00:41:29.208 "name": "Nvme0", 00:41:29.208 "trtype": "tcp", 00:41:29.208 "traddr": "10.0.0.2", 00:41:29.208 "adrfam": "ipv4", 00:41:29.208 "trsvcid": "4420", 00:41:29.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:29.208 "hdgst": false, 00:41:29.208 "ddgst": false 00:41:29.208 }, 00:41:29.208 "method": "bdev_nvme_attach_controller" 00:41:29.208 },{ 00:41:29.208 "params": { 00:41:29.208 "name": "Nvme1", 00:41:29.208 "trtype": "tcp", 00:41:29.208 "traddr": "10.0.0.2", 00:41:29.208 "adrfam": "ipv4", 00:41:29.208 "trsvcid": "4420", 00:41:29.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:29.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:29.208 "hdgst": false, 00:41:29.208 "ddgst": false 00:41:29.208 }, 00:41:29.208 "method": "bdev_nvme_attach_controller" 00:41:29.208 }' 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.208 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:29.209 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:29.209 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:41:29.209 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:29.209 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:29.209 14:39:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.209 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:29.209 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:29.209 fio-3.35 00:41:29.209 Starting 2 threads 00:41:39.205 00:41:39.205 filename0: (groupid=0, jobs=1): err= 0: pid=3740007: Mon Nov 25 14:39:43 2024 00:41:39.205 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10020msec) 00:41:39.205 slat (nsec): min=5434, max=31434, avg=6370.80, stdev=1780.90 00:41:39.205 clat (usec): min=40727, max=42873, avg=41046.18, stdev=265.48 00:41:39.205 lat (usec): min=40732, max=42878, avg=41052.55, stdev=265.86 00:41:39.205 clat percentiles (usec): 00:41:39.205 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:39.205 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:39.205 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:41:39.205 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:39.205 | 99.99th=[42730] 00:41:39.205 bw ( KiB/s): min= 384, max= 416, per=33.70%, avg=388.80, stdev=11.72, samples=20 00:41:39.205 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:41:39.205 lat (msec) : 50=100.00% 00:41:39.205 cpu : usr=95.26%, sys=4.53%, ctx=14, majf=0, minf=118 00:41:39.205 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:39.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.205 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:39.205 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:39.205 filename1: (groupid=0, jobs=1): err= 0: pid=3740008: Mon Nov 25 14:39:43 2024 00:41:39.205 read: IOPS=190, BW=763KiB/s (781kB/s)(7632KiB/10003msec) 00:41:39.205 slat (nsec): min=5444, max=35245, avg=6334.72, stdev=1972.82 00:41:39.205 clat (usec): min=570, max=41846, avg=20952.36, stdev=20147.93 00:41:39.205 lat (usec): min=575, max=41852, avg=20958.70, stdev=20147.83 00:41:39.205 clat percentiles (usec): 00:41:39.205 | 1.00th=[ 701], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 840], 00:41:39.205 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 1975], 60.00th=[41157], 00:41:39.205 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:39.205 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:39.205 | 99.99th=[41681] 00:41:39.205 bw ( KiB/s): min= 736, max= 768, per=66.36%, avg=764.63, stdev=10.09, samples=19 00:41:39.206 iops : min= 184, max= 192, avg=191.16, stdev= 2.52, samples=19 00:41:39.206 lat (usec) : 750=1.94%, 1000=46.80% 00:41:39.206 lat (msec) : 2=1.31%, 4=0.05%, 50=49.90% 00:41:39.206 cpu : usr=95.52%, sys=4.28%, ctx=8, majf=0, minf=143 00:41:39.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:39.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:39.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:39.206 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:39.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:39.206 00:41:39.206 Run status group 0 (all jobs): 00:41:39.206 READ: bw=1151KiB/s (1179kB/s), 390KiB/s-763KiB/s (399kB/s-781kB/s), io=11.3MiB (11.8MB), run=10003-10020msec 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 00:41:39.206 real 0m11.425s 00:41:39.206 user 0m35.449s 00:41:39.206 sys 0m1.220s 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:39.206 14:39:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 ************************************ 00:41:39.206 END TEST fio_dif_1_multi_subsystems 00:41:39.206 ************************************ 00:41:39.206 14:39:44 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:41:39.206 14:39:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:39.206 14:39:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:39.206 14:39:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 ************************************ 00:41:39.206 START TEST fio_dif_rand_params 00:41:39.206 ************************************ 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 bdev_null0 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:39.206 [2024-11-25 14:39:44.102997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:39.206 
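The NOTICE above marks the point where target/dif.sh has finished the standard three-step setup for subsystem 0. Condensed into equivalent rpc.py calls, as a sketch: the log drives the same RPCs through the rpc_cmd wrapper, and the rpc.py path and default /var/tmp/spdk.sock socket are assumptions here:

# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# Expose it as an NVMe-oF subsystem and add the bdev as a namespace
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# Listen on the TCP transport created earlier with --dif-insert-or-strip
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420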
14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:39.206 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:39.206 { 00:41:39.206 "params": { 00:41:39.206 "name": "Nvme$subsystem", 00:41:39.206 "trtype": "$TEST_TRANSPORT", 00:41:39.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:39.206 "adrfam": "ipv4", 00:41:39.206 "trsvcid": "$NVMF_PORT", 00:41:39.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:39.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:39.206 "hdgst": ${hdgst:-false}, 00:41:39.206 "ddgst": ${ddgst:-false} 00:41:39.206 }, 00:41:39.206 "method": "bdev_nvme_attach_controller" 00:41:39.206 } 00:41:39.206 EOF 00:41:39.206 )") 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
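The repeated ldd/grep/awk lines in this stretch are fio_plugin() in autotest_common.sh probing whether the fio plugin links a sanitizer runtime; if it does, that library must be preloaded ahead of the plugin or fio fails at dlopen time. A minimal sketch of the same check, with the plugin path taken from the log and the job/config paths as placeholders:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
    # Take the resolved library path (third ldd column); empty if not linked
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break
done
# The sanitizer runtime (if any) must come before the plugin in LD_PRELOAD
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf "$json_conf" "$job_file"

In both runs above asan_lib resolves to empty, which is why the log's LD_PRELOAD contains only the plugin itself.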
00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:39.207 "params": { 00:41:39.207 "name": "Nvme0", 00:41:39.207 "trtype": "tcp", 00:41:39.207 "traddr": "10.0.0.2", 00:41:39.207 "adrfam": "ipv4", 00:41:39.207 "trsvcid": "4420", 00:41:39.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:39.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:39.207 "hdgst": false, 00:41:39.207 "ddgst": false 00:41:39.207 }, 00:41:39.207 "method": "bdev_nvme_attach_controller" 00:41:39.207 }' 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:39.207 14:39:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:39.467 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:39.467 ... 
00:41:39.467 fio-3.35 00:41:39.467 Starting 3 threads 00:41:46.049 00:41:46.049 filename0: (groupid=0, jobs=1): err= 0: pid=3742200: Mon Nov 25 14:39:50 2024 00:41:46.049 read: IOPS=307, BW=38.4MiB/s (40.3MB/s)(194MiB/5045msec) 00:41:46.049 slat (nsec): min=5469, max=31681, avg=6215.29, stdev=1484.07 00:41:46.049 clat (usec): min=5555, max=88979, avg=9727.90, stdev=6834.44 00:41:46.049 lat (usec): min=5561, max=88986, avg=9734.11, stdev=6834.61 00:41:46.049 clat percentiles (usec): 00:41:46.049 | 1.00th=[ 6325], 5.00th=[ 6980], 10.00th=[ 7242], 20.00th=[ 7701], 00:41:46.049 | 30.00th=[ 8029], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:41:46.049 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10421], 00:41:46.049 | 99.00th=[49021], 99.50th=[49546], 99.90th=[50070], 99.95th=[88605], 00:41:46.049 | 99.99th=[88605] 00:41:46.049 bw ( KiB/s): min=22272, max=44544, per=32.51%, avg=39628.80, stdev=7024.13, samples=10 00:41:46.049 iops : min= 174, max= 348, avg=309.60, stdev=54.88, samples=10 00:41:46.049 lat (msec) : 10=91.55%, 20=5.68%, 50=2.65%, 100=0.13% 00:41:46.049 cpu : usr=94.51%, sys=5.25%, ctx=10, majf=0, minf=72 00:41:46.049 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:46.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.049 issued rwts: total=1550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:46.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:46.049 filename0: (groupid=0, jobs=1): err= 0: pid=3742201: Mon Nov 25 14:39:50 2024 00:41:46.049 read: IOPS=330, BW=41.3MiB/s (43.3MB/s)(209MiB/5045msec) 00:41:46.049 slat (nsec): min=5491, max=32014, avg=8253.91, stdev=1865.68 00:41:46.049 clat (usec): min=5072, max=49221, avg=9038.38, stdev=3282.47 00:41:46.050 lat (usec): min=5090, max=49227, avg=9046.63, stdev=3282.44 00:41:46.050 clat percentiles (usec): 00:41:46.050 | 1.00th=[ 5800], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7701], 00:41:46.050 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9241], 00:41:46.050 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:41:46.050 | 99.00th=[11338], 99.50th=[45876], 99.90th=[48497], 99.95th=[49021], 00:41:46.050 | 99.99th=[49021] 00:41:46.050 bw ( KiB/s): min=37376, max=47360, per=34.99%, avg=42649.60, stdev=2580.40, samples=10 00:41:46.050 iops : min= 292, max= 370, avg=333.20, stdev=20.16, samples=10 00:41:46.050 lat (msec) : 10=87.95%, 20=11.39%, 50=0.66% 00:41:46.050 cpu : usr=94.83%, sys=4.92%, ctx=6, majf=0, minf=95 00:41:46.050 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:46.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.050 issued rwts: total=1668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:46.050 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:46.050 filename0: (groupid=0, jobs=1): err= 0: pid=3742202: Mon Nov 25 14:39:50 2024 00:41:46.050 read: IOPS=317, BW=39.6MiB/s (41.6MB/s)(198MiB/5003msec) 00:41:46.050 slat (nsec): min=8232, max=38729, avg=8750.45, stdev=1407.69 00:41:46.050 clat (usec): min=4094, max=52323, avg=9452.48, stdev=2244.94 00:41:46.050 lat (usec): min=4102, max=52354, avg=9461.23, stdev=2245.30 00:41:46.050 clat percentiles (usec): 00:41:46.050 | 1.00th=[ 5211], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 8356], 00:41:46.050 
| 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:41:46.050 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:41:46.050 | 99.00th=[11731], 99.50th=[11994], 99.90th=[52167], 99.95th=[52167], 00:41:46.050 | 99.99th=[52167] 00:41:46.050 bw ( KiB/s): min=39424, max=45056, per=33.49%, avg=40817.78, stdev=1736.80, samples=9 00:41:46.050 iops : min= 308, max= 352, avg=318.89, stdev=13.57, samples=9 00:41:46.050 lat (msec) : 10=66.33%, 20=33.48%, 100=0.19% 00:41:46.050 cpu : usr=94.80%, sys=4.96%, ctx=8, majf=0, minf=102 00:41:46.050 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:46.050 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.050 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:46.050 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:46.050 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:46.050 00:41:46.050 Run status group 0 (all jobs): 00:41:46.050 READ: bw=119MiB/s (125MB/s), 38.4MiB/s-41.3MiB/s (40.3MB/s-43.3MB/s), io=601MiB (630MB), run=5003-5045msec 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 bdev_null0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 [2024-11-25 14:39:50.292470] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 bdev_null1 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
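The create_subsystem helper traced above reduces to four RPC calls per subsystem. A minimal standalone sketch, with command names and arguments taken from the trace and assuming rpc_cmd wraps SPDK's scripts/rpc.py against an already-running target with a TCP transport:

    #!/usr/bin/env bash
    # Per-subsystem setup as traced above, shown for sub_id=0.
    sub=0
    ./scripts/rpc.py bdev_null_create "bdev_null${sub}" 64 512 \
        --md-size 16 --dif-type 2
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
        --serial-number "53313233-${sub}" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" \
        "bdev_null${sub}"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
        -t tcp -a 10.0.0.2 -s 4420

The --md-size 16 --dif-type 2 pair is what distinguishes this pass from the earlier one: each 512-byte block of the null bdev now carries 16 bytes of metadata with DIF type 2 protection.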
00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 bdev_null2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:46.050 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:46.051 { 00:41:46.051 "params": { 00:41:46.051 "name": "Nvme$subsystem", 00:41:46.051 "trtype": "$TEST_TRANSPORT", 00:41:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:46.051 "adrfam": "ipv4", 00:41:46.051 "trsvcid": "$NVMF_PORT", 00:41:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:46.051 "hdgst": ${hdgst:-false}, 00:41:46.051 "ddgst": ${ddgst:-false} 00:41:46.051 }, 00:41:46.051 "method": "bdev_nvme_attach_controller" 00:41:46.051 } 00:41:46.051 EOF 00:41:46.051 )") 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:46.051 { 00:41:46.051 "params": { 00:41:46.051 "name": "Nvme$subsystem", 00:41:46.051 "trtype": "$TEST_TRANSPORT", 00:41:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:46.051 "adrfam": "ipv4", 00:41:46.051 "trsvcid": "$NVMF_PORT", 00:41:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:46.051 "hdgst": ${hdgst:-false}, 00:41:46.051 "ddgst": ${ddgst:-false} 00:41:46.051 }, 00:41:46.051 "method": "bdev_nvme_attach_controller" 00:41:46.051 } 00:41:46.051 EOF 00:41:46.051 )") 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
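The ldd | grep libasan | awk pipeline above is sanitizer handling for the fio plugin: if build/fio/spdk_bdev links against an ASAN runtime, that library has to be preloaded ahead of the plugin itself. A condensed sketch of the same logic, using the paths shown in the trace:

    # Detect an ASAN runtime linked into the fio plugin; preload it first.
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # In this run neither libasan nor libclang_rt.asan is found, so
    # LD_PRELOAD ends up holding only the plugin, as the trace shows.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61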
00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:46.051 { 00:41:46.051 "params": { 00:41:46.051 "name": "Nvme$subsystem", 00:41:46.051 "trtype": "$TEST_TRANSPORT", 00:41:46.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:46.051 "adrfam": "ipv4", 00:41:46.051 "trsvcid": "$NVMF_PORT", 00:41:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:46.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:46.051 "hdgst": ${hdgst:-false}, 00:41:46.051 "ddgst": ${ddgst:-false} 00:41:46.051 }, 00:41:46.051 "method": "bdev_nvme_attach_controller" 00:41:46.051 } 00:41:46.051 EOF 00:41:46.051 )") 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:46.051 "params": { 00:41:46.051 "name": "Nvme0", 00:41:46.051 "trtype": "tcp", 00:41:46.051 "traddr": "10.0.0.2", 00:41:46.051 "adrfam": "ipv4", 00:41:46.051 "trsvcid": "4420", 00:41:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:46.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:46.051 "hdgst": false, 00:41:46.051 "ddgst": false 00:41:46.051 }, 00:41:46.051 "method": "bdev_nvme_attach_controller" 00:41:46.051 },{ 00:41:46.051 "params": { 00:41:46.051 "name": "Nvme1", 00:41:46.051 "trtype": "tcp", 00:41:46.051 "traddr": "10.0.0.2", 00:41:46.051 "adrfam": "ipv4", 00:41:46.051 "trsvcid": "4420", 00:41:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:46.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:46.051 "hdgst": false, 00:41:46.051 "ddgst": false 00:41:46.051 }, 00:41:46.051 "method": "bdev_nvme_attach_controller" 00:41:46.051 },{ 00:41:46.051 "params": { 00:41:46.051 "name": "Nvme2", 00:41:46.051 "trtype": "tcp", 00:41:46.051 "traddr": "10.0.0.2", 00:41:46.051 "adrfam": "ipv4", 00:41:46.051 "trsvcid": "4420", 00:41:46.051 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:41:46.051 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:41:46.051 "hdgst": false, 00:41:46.051 "ddgst": false 00:41:46.051 }, 00:41:46.051 "method": "bdev_nvme_attach_controller" 00:41:46.051 }' 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:46.051 14:39:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:46.051 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:46.051 ... 00:41:46.051 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:46.051 ... 00:41:46.051 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:41:46.051 ... 00:41:46.051 fio-3.35 00:41:46.051 Starting 24 threads 00:41:58.272 00:41:58.272 filename0: (groupid=0, jobs=1): err= 0: pid=3743707: Mon Nov 25 14:40:01 2024 00:41:58.272 read: IOPS=716, BW=2867KiB/s (2935kB/s)(28.0MiB/10016msec) 00:41:58.272 slat (usec): min=5, max=102, avg=10.98, stdev= 9.55 00:41:58.272 clat (usec): min=1107, max=42757, avg=22238.70, stdev=4425.84 00:41:58.272 lat (usec): min=1125, max=42766, avg=22249.68, stdev=4425.21 00:41:58.272 clat percentiles (usec): 00:41:58.272 | 1.00th=[ 1532], 5.00th=[13698], 10.00th=[17171], 20.00th=[22414], 00:41:58.272 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:41:58.272 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:41:58.272 | 99.00th=[30540], 99.50th=[35390], 99.90th=[42206], 99.95th=[42730], 00:41:58.272 | 99.99th=[42730] 00:41:58.272 bw ( KiB/s): min= 2688, max= 4272, per=4.30%, avg=2874.11, stdev=359.52, samples=19 00:41:58.272 iops : min= 672, max= 1068, avg=718.53, stdev=89.88, samples=19 00:41:58.272 lat (msec) : 2=1.52%, 4=0.57%, 10=0.82%, 20=10.39%, 50=86.70% 00:41:58.272 cpu : usr=98.78%, sys=0.90%, ctx=20, majf=0, minf=93 00:41:58.272 IO depths : 1=5.2%, 2=10.5%, 4=21.9%, 8=55.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:41:58.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.272 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.272 issued rwts: total=7178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.272 filename0: (groupid=0, jobs=1): err= 0: pid=3743708: Mon Nov 25 14:40:01 2024 00:41:58.272 read: IOPS=730, BW=2923KiB/s (2993kB/s)(28.6MiB/10013msec) 00:41:58.272 slat (usec): min=5, max=116, avg=14.06, stdev=12.49 00:41:58.272 clat (usec): min=6678, max=40826, avg=21808.54, stdev=4623.20 00:41:58.272 lat (usec): min=6686, max=40846, avg=21822.60, stdev=4625.15 00:41:58.272 clat percentiles (usec): 00:41:58.272 | 1.00th=[11600], 5.00th=[14091], 10.00th=[15401], 20.00th=[17695], 00:41:58.272 | 30.00th=[19792], 40.00th=[22152], 50.00th=[22938], 60.00th=[23200], 00:41:58.272 | 70.00th=[23725], 80.00th=[23987], 90.00th=[26346], 95.00th=[30016], 00:41:58.272 | 99.00th=[35914], 99.50th=[38011], 99.90th=[39584], 99.95th=[39584], 00:41:58.272 | 99.99th=[40633] 00:41:58.272 bw ( KiB/s): min= 2688, max= 3184, per=4.37%, avg=2917.89, stdev=118.56, samples=19 00:41:58.272 iops : min= 672, max= 796, avg=729.47, stdev=29.64, samples=19 00:41:58.272 lat (msec) : 10=0.67%, 20=29.81%, 50=69.52% 00:41:58.272 cpu : usr=98.89%, sys=0.79%, ctx=17, majf=0, minf=68 00:41:58.272 IO depths : 1=0.9%, 2=2.0%, 4=8.4%, 8=75.6%, 16=13.0%, 32=0.0%, >=64=0.0% 
00:41:58.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.272 complete : 0=0.0%, 4=89.9%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.272 issued rwts: total=7316,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.272 filename0: (groupid=0, jobs=1): err= 0: pid=3743709: Mon Nov 25 14:40:01 2024 00:41:58.272 read: IOPS=698, BW=2793KiB/s (2860kB/s)(27.3MiB/10015msec) 00:41:58.272 slat (usec): min=5, max=141, avg=18.16, stdev=15.05 00:41:58.272 clat (usec): min=5152, max=40567, avg=22764.48, stdev=3145.55 00:41:58.272 lat (usec): min=5165, max=40625, avg=22782.64, stdev=3146.78 00:41:58.272 clat percentiles (usec): 00:41:58.272 | 1.00th=[ 9896], 5.00th=[15795], 10.00th=[19792], 20.00th=[22676], 00:41:58.272 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:41:58.272 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:41:58.272 | 99.00th=[31065], 99.50th=[36439], 99.90th=[39584], 99.95th=[40633], 00:41:58.272 | 99.99th=[40633] 00:41:58.272 bw ( KiB/s): min= 2688, max= 3200, per=4.20%, avg=2803.37, stdev=125.26, samples=19 00:41:58.272 iops : min= 672, max= 800, avg=700.84, stdev=31.31, samples=19 00:41:58.272 lat (msec) : 10=1.00%, 20=9.15%, 50=89.85% 00:41:58.272 cpu : usr=98.83%, sys=0.84%, ctx=17, majf=0, minf=32 00:41:58.272 IO depths : 1=5.3%, 2=10.6%, 4=22.3%, 8=54.5%, 16=7.3%, 32=0.0%, >=64=0.0% 00:41:58.272 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.272 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.272 issued rwts: total=6994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.272 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.272 filename0: (groupid=0, jobs=1): err= 0: pid=3743710: Mon Nov 25 14:40:01 2024 00:41:58.272 read: IOPS=686, BW=2746KiB/s (2812kB/s)(26.8MiB/10006msec) 00:41:58.272 slat (usec): min=4, max=119, avg=22.75, stdev=17.60 00:41:58.272 clat (usec): min=6180, max=42538, avg=23099.60, stdev=3061.77 00:41:58.272 lat (usec): min=6193, max=42551, avg=23122.35, stdev=3063.20 00:41:58.272 clat percentiles (usec): 00:41:58.272 | 1.00th=[13698], 5.00th=[16712], 10.00th=[20317], 20.00th=[22676], 00:41:58.272 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.272 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[26084], 00:41:58.272 | 99.00th=[33424], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:41:58.272 | 99.99th=[42730] 00:41:58.272 bw ( KiB/s): min= 2528, max= 3104, per=4.11%, avg=2744.42, stdev=125.35, samples=19 00:41:58.272 iops : min= 632, max= 776, avg=686.11, stdev=31.34, samples=19 00:41:58.273 lat (msec) : 10=0.23%, 20=9.23%, 50=90.54% 00:41:58.273 cpu : usr=98.63%, sys=0.94%, ctx=62, majf=0, minf=56 00:41:58.273 IO depths : 1=4.5%, 2=9.1%, 4=19.7%, 8=58.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=6870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.273 filename0: (groupid=0, jobs=1): err= 0: pid=3743711: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=696, BW=2786KiB/s (2853kB/s)(27.2MiB/10010msec) 00:41:58.273 slat (nsec): min=5453, max=94523, avg=16100.12, stdev=14745.64 00:41:58.273 clat (usec): min=7170, max=40126, 
avg=22840.48, stdev=3011.61 00:41:58.273 lat (usec): min=7178, max=40149, avg=22856.58, stdev=3012.86 00:41:58.273 clat percentiles (usec): 00:41:58.273 | 1.00th=[13173], 5.00th=[15795], 10.00th=[19268], 20.00th=[22676], 00:41:58.273 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:41:58.273 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:41:58.273 | 99.00th=[32900], 99.50th=[35914], 99.90th=[39584], 99.95th=[40109], 00:41:58.273 | 99.99th=[40109] 00:41:58.273 bw ( KiB/s): min= 2688, max= 3446, per=4.18%, avg=2794.42, stdev=180.11, samples=19 00:41:58.273 iops : min= 672, max= 861, avg=698.58, stdev=44.93, samples=19 00:41:58.273 lat (msec) : 10=0.57%, 20=10.31%, 50=89.11% 00:41:58.273 cpu : usr=98.83%, sys=0.85%, ctx=19, majf=0, minf=38 00:41:58.273 IO depths : 1=4.9%, 2=10.1%, 4=21.6%, 8=55.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=6972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.273 filename0: (groupid=0, jobs=1): err= 0: pid=3743712: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=700, BW=2801KiB/s (2869kB/s)(27.4MiB/10004msec) 00:41:58.273 slat (nsec): min=5452, max=92262, avg=16177.99, stdev=13159.02 00:41:58.273 clat (usec): min=11184, max=39605, avg=22722.41, stdev=3326.67 00:41:58.273 lat (usec): min=11195, max=39613, avg=22738.59, stdev=3328.72 00:41:58.273 clat percentiles (usec): 00:41:58.273 | 1.00th=[13042], 5.00th=[15795], 10.00th=[17433], 20.00th=[22152], 00:41:58.273 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.273 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[26084], 00:41:58.273 | 99.00th=[32900], 99.50th=[34866], 99.90th=[39584], 99.95th=[39584], 00:41:58.273 | 99.99th=[39584] 00:41:58.273 bw ( KiB/s): min= 2648, max= 3104, per=4.21%, avg=2808.11, stdev=148.71, samples=19 00:41:58.273 iops : min= 662, max= 776, avg=702.00, stdev=37.20, samples=19 00:41:58.273 lat (msec) : 20=14.56%, 50=85.44% 00:41:58.273 cpu : usr=98.95%, sys=0.74%, ctx=13, majf=0, minf=54 00:41:58.273 IO depths : 1=3.4%, 2=6.9%, 4=16.9%, 8=63.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=7006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.273 filename0: (groupid=0, jobs=1): err= 0: pid=3743713: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=679, BW=2717KiB/s (2783kB/s)(26.5MiB/10001msec) 00:41:58.273 slat (nsec): min=5027, max=96768, avg=20317.69, stdev=14834.72 00:41:58.273 clat (usec): min=11425, max=37507, avg=23381.67, stdev=2023.35 00:41:58.273 lat (usec): min=11434, max=37518, avg=23401.99, stdev=2023.40 00:41:58.273 clat percentiles (usec): 00:41:58.273 | 1.00th=[15270], 5.00th=[21103], 10.00th=[22414], 20.00th=[22676], 00:41:58.273 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:41:58.273 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24511], 95.00th=[25297], 00:41:58.273 | 99.00th=[30540], 99.50th=[32900], 99.90th=[36439], 99.95th=[37487], 00:41:58.273 | 99.99th=[37487] 00:41:58.273 bw ( KiB/s): min= 2560, max= 2832, per=4.07%, avg=2718.84, stdev=78.79, 
samples=19 00:41:58.273 iops : min= 640, max= 708, avg=679.68, stdev=19.67, samples=19 00:41:58.273 lat (msec) : 20=4.03%, 50=95.97% 00:41:58.273 cpu : usr=98.81%, sys=0.86%, ctx=15, majf=0, minf=32 00:41:58.273 IO depths : 1=4.7%, 2=9.5%, 4=21.9%, 8=56.0%, 16=7.9%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=6794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.273 filename0: (groupid=0, jobs=1): err= 0: pid=3743714: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=679, BW=2717KiB/s (2783kB/s)(26.5MiB/10004msec) 00:41:58.273 slat (usec): min=5, max=121, avg=24.85, stdev=19.17 00:41:58.273 clat (usec): min=7494, max=49024, avg=23340.97, stdev=3428.82 00:41:58.273 lat (usec): min=7500, max=49040, avg=23365.81, stdev=3430.58 00:41:58.273 clat percentiles (usec): 00:41:58.273 | 1.00th=[13960], 5.00th=[16450], 10.00th=[20841], 20.00th=[22676], 00:41:58.273 | 30.00th=[22938], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.273 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25297], 95.00th=[29230], 00:41:58.273 | 99.00th=[35914], 99.50th=[37487], 99.90th=[49021], 99.95th=[49021], 00:41:58.273 | 99.99th=[49021] 00:41:58.273 bw ( KiB/s): min= 2480, max= 2928, per=4.06%, avg=2713.26, stdev=100.28, samples=19 00:41:58.273 iops : min= 620, max= 732, avg=678.32, stdev=25.07, samples=19 00:41:58.273 lat (msec) : 10=0.21%, 20=8.81%, 50=90.98% 00:41:58.273 cpu : usr=98.86%, sys=0.82%, ctx=17, majf=0, minf=35 00:41:58.273 IO depths : 1=3.7%, 2=8.0%, 4=18.7%, 8=60.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=92.5%, 8=2.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=6796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.273 filename1: (groupid=0, jobs=1): err= 0: pid=3743715: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=699, BW=2798KiB/s (2865kB/s)(27.3MiB/10005msec) 00:41:58.273 slat (usec): min=4, max=127, avg=21.79, stdev=17.34 00:41:58.273 clat (usec): min=7194, max=41691, avg=22695.18, stdev=3519.73 00:41:58.273 lat (usec): min=7205, max=41705, avg=22716.97, stdev=3522.30 00:41:58.273 clat percentiles (usec): 00:41:58.273 | 1.00th=[13304], 5.00th=[15139], 10.00th=[17433], 20.00th=[22152], 00:41:58.273 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.273 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[26870], 00:41:58.273 | 99.00th=[32900], 99.50th=[35914], 99.90th=[41157], 99.95th=[41681], 00:41:58.273 | 99.99th=[41681] 00:41:58.273 bw ( KiB/s): min= 2565, max= 3232, per=4.19%, avg=2801.11, stdev=160.89, samples=19 00:41:58.273 iops : min= 641, max= 808, avg=700.26, stdev=40.24, samples=19 00:41:58.273 lat (msec) : 10=0.26%, 20=14.85%, 50=84.90% 00:41:58.273 cpu : usr=98.92%, sys=0.74%, ctx=25, majf=0, minf=39 00:41:58.273 IO depths : 1=3.4%, 2=6.9%, 4=16.1%, 8=63.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=91.6%, 8=3.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=6998,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 
00:41:58.273 filename1: (groupid=0, jobs=1): err= 0: pid=3743716: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=693, BW=2774KiB/s (2840kB/s)(27.1MiB/10003msec) 00:41:58.273 slat (usec): min=5, max=148, avg=20.59, stdev=18.44 00:41:58.273 clat (usec): min=6883, max=40894, avg=22919.39, stdev=3772.47 00:41:58.273 lat (usec): min=6892, max=40914, avg=22939.98, stdev=3774.32 00:41:58.273 clat percentiles (usec): 00:41:58.273 | 1.00th=[12387], 5.00th=[15533], 10.00th=[17695], 20.00th=[22152], 00:41:58.273 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.273 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[28443], 00:41:58.273 | 99.00th=[35914], 99.50th=[36963], 99.90th=[40633], 99.95th=[40633], 00:41:58.273 | 99.99th=[40633] 00:41:58.273 bw ( KiB/s): min= 2624, max= 2912, per=4.15%, avg=2772.21, stdev=93.73, samples=19 00:41:58.273 iops : min= 656, max= 728, avg=693.05, stdev=23.43, samples=19 00:41:58.273 lat (msec) : 10=0.26%, 20=14.06%, 50=85.68% 00:41:58.273 cpu : usr=98.84%, sys=0.75%, ctx=35, majf=0, minf=40 00:41:58.273 IO depths : 1=2.6%, 2=5.6%, 4=14.4%, 8=66.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=91.6%, 8=4.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=6936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.273 filename1: (groupid=0, jobs=1): err= 0: pid=3743717: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=713, BW=2853KiB/s (2922kB/s)(27.9MiB/10010msec) 00:41:58.273 slat (usec): min=5, max=131, avg=19.84, stdev=18.09 00:41:58.273 clat (usec): min=7124, max=42187, avg=22274.03, stdev=3983.76 00:41:58.273 lat (usec): min=7133, max=42196, avg=22293.87, stdev=3986.79 00:41:58.273 clat percentiles (usec): 00:41:58.273 | 1.00th=[12649], 5.00th=[14615], 10.00th=[15926], 20.00th=[19530], 00:41:58.273 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.273 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[27395], 00:41:58.273 | 99.00th=[35390], 99.50th=[38011], 99.90th=[40633], 99.95th=[42206], 00:41:58.273 | 99.99th=[42206] 00:41:58.273 bw ( KiB/s): min= 2640, max= 3296, per=4.22%, avg=2821.89, stdev=166.74, samples=19 00:41:58.273 iops : min= 660, max= 824, avg=705.47, stdev=41.69, samples=19 00:41:58.273 lat (msec) : 10=0.55%, 20=20.46%, 50=78.99% 00:41:58.273 cpu : usr=98.84%, sys=0.83%, ctx=16, majf=0, minf=44 00:41:58.273 IO depths : 1=2.0%, 2=6.0%, 4=18.1%, 8=63.2%, 16=10.7%, 32=0.0%, >=64=0.0% 00:41:58.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 complete : 0=0.0%, 4=92.4%, 8=2.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.273 issued rwts: total=7140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.273 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.273 filename1: (groupid=0, jobs=1): err= 0: pid=3743718: Mon Nov 25 14:40:01 2024 00:41:58.273 read: IOPS=694, BW=2778KiB/s (2844kB/s)(27.1MiB/10005msec) 00:41:58.273 slat (usec): min=5, max=115, avg=19.16, stdev=15.15 00:41:58.273 clat (usec): min=6663, max=49208, avg=22885.89, stdev=3583.95 00:41:58.273 lat (usec): min=6668, max=49223, avg=22905.04, stdev=3585.89 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[13435], 5.00th=[15795], 10.00th=[17957], 20.00th=[22414], 00:41:58.274 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.274 | 
70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[27919], 00:41:58.274 | 99.00th=[34341], 99.50th=[36439], 99.90th=[49021], 99.95th=[49021], 00:41:58.274 | 99.99th=[49021] 00:41:58.274 bw ( KiB/s): min= 2565, max= 3056, per=4.14%, avg=2767.42, stdev=106.16, samples=19 00:41:58.274 iops : min= 641, max= 764, avg=691.84, stdev=26.57, samples=19 00:41:58.274 lat (msec) : 10=0.23%, 20=13.56%, 50=86.21% 00:41:58.274 cpu : usr=98.86%, sys=0.80%, ctx=14, majf=0, minf=41 00:41:58.274 IO depths : 1=3.5%, 2=7.1%, 4=16.5%, 8=63.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=91.8%, 8=3.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=6948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.274 filename1: (groupid=0, jobs=1): err= 0: pid=3743719: Mon Nov 25 14:40:01 2024 00:41:58.274 read: IOPS=701, BW=2805KiB/s (2872kB/s)(27.4MiB/10011msec) 00:41:58.274 slat (usec): min=5, max=143, avg=14.19, stdev=12.97 00:41:58.274 clat (usec): min=4721, max=42237, avg=22704.94, stdev=3587.56 00:41:58.274 lat (usec): min=4738, max=42251, avg=22719.13, stdev=3587.81 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[11338], 5.00th=[15270], 10.00th=[18220], 20.00th=[22414], 00:41:58.274 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:41:58.274 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:41:58.274 | 99.00th=[34866], 99.50th=[38536], 99.90th=[41681], 99.95th=[42206], 00:41:58.274 | 99.99th=[42206] 00:41:58.274 bw ( KiB/s): min= 2640, max= 3200, per=4.21%, avg=2814.32, stdev=148.46, samples=19 00:41:58.274 iops : min= 660, max= 800, avg=703.58, stdev=37.12, samples=19 00:41:58.274 lat (msec) : 10=0.98%, 20=11.64%, 50=87.38% 00:41:58.274 cpu : usr=98.70%, sys=0.90%, ctx=40, majf=0, minf=54 00:41:58.274 IO depths : 1=4.9%, 2=9.9%, 4=21.0%, 8=56.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=93.0%, 8=1.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=7020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.274 filename1: (groupid=0, jobs=1): err= 0: pid=3743720: Mon Nov 25 14:40:01 2024 00:41:58.274 read: IOPS=700, BW=2802KiB/s (2869kB/s)(27.4MiB/10003msec) 00:41:58.274 slat (usec): min=5, max=102, avg=19.31, stdev=15.57 00:41:58.274 clat (usec): min=8002, max=41086, avg=22681.68, stdev=3548.21 00:41:58.274 lat (usec): min=8011, max=41135, avg=22700.99, stdev=3549.98 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[12518], 5.00th=[15401], 10.00th=[17433], 20.00th=[22152], 00:41:58.274 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.274 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25822], 00:41:58.274 | 99.00th=[34866], 99.50th=[37487], 99.90th=[41157], 99.95th=[41157], 00:41:58.274 | 99.99th=[41157] 00:41:58.274 bw ( KiB/s): min= 2672, max= 3072, per=4.19%, avg=2801.95, stdev=123.82, samples=19 00:41:58.274 iops : min= 668, max= 768, avg=700.47, stdev=30.97, samples=19 00:41:58.274 lat (msec) : 10=0.23%, 20=15.03%, 50=84.74% 00:41:58.274 cpu : usr=98.86%, sys=0.81%, ctx=31, majf=0, minf=39 00:41:58.274 IO depths : 1=3.8%, 2=8.2%, 4=19.0%, 8=59.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=92.5%, 8=2.3%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=7006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.274 filename1: (groupid=0, jobs=1): err= 0: pid=3743721: Mon Nov 25 14:40:01 2024 00:41:58.274 read: IOPS=689, BW=2759KiB/s (2826kB/s)(27.0MiB/10014msec) 00:41:58.274 slat (nsec): min=5453, max=70755, avg=8694.01, stdev=5419.70 00:41:58.274 clat (usec): min=5329, max=34683, avg=23117.39, stdev=2131.88 00:41:58.274 lat (usec): min=5336, max=34689, avg=23126.09, stdev=2131.35 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[13304], 5.00th=[20579], 10.00th=[22414], 20.00th=[22938], 00:41:58.274 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:41:58.274 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[24773], 00:41:58.274 | 99.00th=[25035], 99.50th=[27395], 99.90th=[28967], 99.95th=[34866], 00:41:58.274 | 99.99th=[34866] 00:41:58.274 bw ( KiB/s): min= 2688, max= 2949, per=4.13%, avg=2760.68, stdev=85.83, samples=19 00:41:58.274 iops : min= 672, max= 737, avg=690.16, stdev=21.43, samples=19 00:41:58.274 lat (msec) : 10=0.55%, 20=3.94%, 50=95.51% 00:41:58.274 cpu : usr=98.92%, sys=0.75%, ctx=18, majf=0, minf=45 00:41:58.274 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=6908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.274 filename1: (groupid=0, jobs=1): err= 0: pid=3743722: Mon Nov 25 14:40:01 2024 00:41:58.274 read: IOPS=679, BW=2718KiB/s (2784kB/s)(26.6MiB/10003msec) 00:41:58.274 slat (nsec): min=5455, max=99723, avg=15166.00, stdev=13196.10 00:41:58.274 clat (usec): min=6203, max=37478, avg=23428.51, stdev=1574.42 00:41:58.274 lat (usec): min=6211, max=37487, avg=23443.68, stdev=1573.79 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[16057], 5.00th=[22152], 10.00th=[22414], 20.00th=[22938], 00:41:58.274 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:41:58.274 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:41:58.274 | 99.00th=[27132], 99.50th=[30016], 99.90th=[36439], 99.95th=[37487], 00:41:58.274 | 99.99th=[37487] 00:41:58.274 bw ( KiB/s): min= 2648, max= 2880, per=4.07%, avg=2720.53, stdev=65.39, samples=19 00:41:58.274 iops : min= 662, max= 720, avg=680.11, stdev=16.31, samples=19 00:41:58.274 lat (msec) : 10=0.09%, 20=1.35%, 50=98.56% 00:41:58.274 cpu : usr=98.94%, sys=0.73%, ctx=16, majf=0, minf=32 00:41:58.274 IO depths : 1=4.7%, 2=9.5%, 4=21.2%, 8=56.5%, 16=8.1%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=6798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.274 filename2: (groupid=0, jobs=1): err= 0: pid=3743723: Mon Nov 25 14:40:01 2024 00:41:58.274 read: IOPS=699, BW=2797KiB/s (2864kB/s)(27.3MiB/10011msec) 00:41:58.274 slat (nsec): min=5440, max=89177, avg=16157.84, stdev=14023.40 00:41:58.274 clat (usec): min=4103, max=38459, avg=22754.53, 
stdev=3371.66 00:41:58.274 lat (usec): min=4118, max=38475, avg=22770.68, stdev=3372.66 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[10814], 5.00th=[15533], 10.00th=[18220], 20.00th=[22414], 00:41:58.274 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:41:58.274 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25035], 00:41:58.274 | 99.00th=[34341], 99.50th=[35914], 99.90th=[38011], 99.95th=[38536], 00:41:58.274 | 99.99th=[38536] 00:41:58.274 bw ( KiB/s): min= 2688, max= 3248, per=4.19%, avg=2799.16, stdev=164.68, samples=19 00:41:58.274 iops : min= 672, max= 812, avg=699.79, stdev=41.17, samples=19 00:41:58.274 lat (msec) : 10=0.84%, 20=11.09%, 50=88.07% 00:41:58.274 cpu : usr=98.59%, sys=1.07%, ctx=18, majf=0, minf=43 00:41:58.274 IO depths : 1=5.1%, 2=10.2%, 4=21.6%, 8=55.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=7000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.274 filename2: (groupid=0, jobs=1): err= 0: pid=3743724: Mon Nov 25 14:40:01 2024 00:41:58.274 read: IOPS=679, BW=2718KiB/s (2783kB/s)(26.6MiB/10004msec) 00:41:58.274 slat (nsec): min=5449, max=96925, avg=13267.76, stdev=11951.76 00:41:58.274 clat (usec): min=7473, max=48165, avg=23487.04, stdev=3525.61 00:41:58.274 lat (usec): min=7480, max=48181, avg=23500.31, stdev=3526.22 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[13566], 5.00th=[16712], 10.00th=[20055], 20.00th=[22676], 00:41:58.274 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:41:58.274 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25822], 95.00th=[30278], 00:41:58.274 | 99.00th=[34866], 99.50th=[36963], 99.90th=[40109], 99.95th=[47973], 00:41:58.274 | 99.99th=[47973] 00:41:58.274 bw ( KiB/s): min= 2576, max= 2856, per=4.06%, avg=2711.58, stdev=74.55, samples=19 00:41:58.274 iops : min= 644, max= 714, avg=677.89, stdev=18.64, samples=19 00:41:58.274 lat (msec) : 10=0.21%, 20=9.77%, 50=90.03% 00:41:58.274 cpu : usr=98.98%, sys=0.68%, ctx=14, majf=0, minf=37 00:41:58.274 IO depths : 1=0.1%, 2=0.3%, 4=2.9%, 8=80.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=89.3%, 8=8.9%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=6798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.274 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.274 filename2: (groupid=0, jobs=1): err= 0: pid=3743725: Mon Nov 25 14:40:01 2024 00:41:58.274 read: IOPS=698, BW=2793KiB/s (2860kB/s)(27.3MiB/10012msec) 00:41:58.274 slat (nsec): min=5440, max=88298, avg=16968.23, stdev=14171.57 00:41:58.274 clat (usec): min=7069, max=40562, avg=22780.78, stdev=3086.96 00:41:58.274 lat (usec): min=7076, max=40570, avg=22797.75, stdev=3088.18 00:41:58.274 clat percentiles (usec): 00:41:58.274 | 1.00th=[13042], 5.00th=[15926], 10.00th=[18482], 20.00th=[22414], 00:41:58.274 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:41:58.274 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25560], 00:41:58.274 | 99.00th=[32375], 99.50th=[33817], 99.90th=[34866], 99.95th=[40633], 00:41:58.274 | 99.99th=[40633] 00:41:58.274 bw ( KiB/s): min= 2688, max= 2960, per=4.19%, avg=2796.63, stdev=103.63, samples=19 
00:41:58.274 iops : min= 672, max= 740, avg=699.16, stdev=25.91, samples=19 00:41:58.274 lat (msec) : 10=0.46%, 20=12.16%, 50=87.38% 00:41:58.274 cpu : usr=98.82%, sys=0.85%, ctx=15, majf=0, minf=46 00:41:58.274 IO depths : 1=4.9%, 2=9.9%, 4=21.0%, 8=56.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:41:58.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.274 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.275 filename2: (groupid=0, jobs=1): err= 0: pid=3743726: Mon Nov 25 14:40:01 2024 00:41:58.275 read: IOPS=692, BW=2771KiB/s (2837kB/s)(27.1MiB/10010msec) 00:41:58.275 slat (nsec): min=5461, max=95445, avg=18472.37, stdev=14447.59 00:41:58.275 clat (usec): min=7960, max=38366, avg=22945.30, stdev=2984.74 00:41:58.275 lat (usec): min=7968, max=38372, avg=22963.77, stdev=2986.33 00:41:58.275 clat percentiles (usec): 00:41:58.275 | 1.00th=[13960], 5.00th=[16450], 10.00th=[19268], 20.00th=[22676], 00:41:58.275 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23200], 60.00th=[23462], 00:41:58.275 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24511], 95.00th=[25560], 00:41:58.275 | 99.00th=[32637], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:41:58.275 | 99.99th=[38536] 00:41:58.275 bw ( KiB/s): min= 2560, max= 2880, per=4.15%, avg=2772.74, stdev=95.42, samples=19 00:41:58.275 iops : min= 640, max= 720, avg=693.16, stdev=23.85, samples=19 00:41:58.275 lat (msec) : 10=0.23%, 20=10.74%, 50=89.03% 00:41:58.275 cpu : usr=98.82%, sys=0.85%, ctx=15, majf=0, minf=31 00:41:58.275 IO depths : 1=4.4%, 2=8.9%, 4=19.5%, 8=58.8%, 16=8.3%, 32=0.0%, >=64=0.0% 00:41:58.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 complete : 0=0.0%, 4=92.6%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 issued rwts: total=6934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.275 filename2: (groupid=0, jobs=1): err= 0: pid=3743727: Mon Nov 25 14:40:01 2024 00:41:58.275 read: IOPS=685, BW=2743KiB/s (2809kB/s)(26.8MiB/10015msec) 00:41:58.275 slat (usec): min=5, max=114, avg=21.37, stdev=16.19 00:41:58.275 clat (usec): min=8402, max=41554, avg=23151.19, stdev=3194.71 00:41:58.275 lat (usec): min=8410, max=41605, avg=23172.56, stdev=3196.42 00:41:58.275 clat percentiles (usec): 00:41:58.275 | 1.00th=[13566], 5.00th=[16450], 10.00th=[20055], 20.00th=[22676], 00:41:58.275 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23462], 00:41:58.275 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[27132], 00:41:58.275 | 99.00th=[33817], 99.50th=[36963], 99.90th=[41157], 99.95th=[41681], 00:41:58.275 | 99.99th=[41681] 00:41:58.275 bw ( KiB/s): min= 2560, max= 3008, per=4.10%, avg=2736.00, stdev=122.55, samples=19 00:41:58.275 iops : min= 640, max= 752, avg=684.00, stdev=30.64, samples=19 00:41:58.275 lat (msec) : 10=0.23%, 20=9.71%, 50=90.06% 00:41:58.275 cpu : usr=98.73%, sys=0.95%, ctx=26, majf=0, minf=33 00:41:58.275 IO depths : 1=5.0%, 2=10.1%, 4=21.3%, 8=55.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:41:58.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 issued rwts: total=6868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.275 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:41:58.275 filename2: (groupid=0, jobs=1): err= 0: pid=3743728: Mon Nov 25 14:40:01 2024 00:41:58.275 read: IOPS=679, BW=2717KiB/s (2783kB/s)(26.5MiB/10004msec) 00:41:58.275 slat (nsec): min=5444, max=85096, avg=12817.26, stdev=11027.69 00:41:58.275 clat (usec): min=6149, max=48191, avg=23498.01, stdev=3646.29 00:41:58.275 lat (usec): min=6155, max=48208, avg=23510.83, stdev=3647.18 00:41:58.275 clat percentiles (usec): 00:41:58.275 | 1.00th=[12780], 5.00th=[17433], 10.00th=[19792], 20.00th=[22414], 00:41:58.275 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23462], 60.00th=[23725], 00:41:58.275 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26870], 95.00th=[29754], 00:41:58.275 | 99.00th=[36439], 99.50th=[38011], 99.90th=[40633], 99.95th=[47973], 00:41:58.275 | 99.99th=[47973] 00:41:58.275 bw ( KiB/s): min= 2608, max= 2832, per=4.05%, avg=2706.53, stdev=71.85, samples=19 00:41:58.275 iops : min= 652, max= 708, avg=676.63, stdev=17.96, samples=19 00:41:58.275 lat (msec) : 10=0.34%, 20=10.58%, 50=89.08% 00:41:58.275 cpu : usr=98.95%, sys=0.71%, ctx=15, majf=0, minf=52 00:41:58.275 IO depths : 1=0.2%, 2=0.4%, 4=3.4%, 8=79.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:41:58.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 issued rwts: total=6796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.275 filename2: (groupid=0, jobs=1): err= 0: pid=3743729: Mon Nov 25 14:40:01 2024 00:41:58.275 read: IOPS=698, BW=2795KiB/s (2863kB/s)(27.3MiB/10002msec) 00:41:58.275 slat (nsec): min=5426, max=92535, avg=17487.92, stdev=13728.37 00:41:58.275 clat (usec): min=7264, max=40203, avg=22759.82, stdev=3641.34 00:41:58.275 lat (usec): min=7305, max=40212, avg=22777.30, stdev=3642.88 00:41:58.275 clat percentiles (usec): 00:41:58.275 | 1.00th=[12256], 5.00th=[15795], 10.00th=[17695], 20.00th=[21890], 00:41:58.275 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:41:58.275 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25035], 95.00th=[29230], 00:41:58.275 | 99.00th=[34341], 99.50th=[35390], 99.90th=[38011], 99.95th=[40109], 00:41:58.275 | 99.99th=[40109] 00:41:58.275 bw ( KiB/s): min= 2560, max= 3024, per=4.19%, avg=2801.37, stdev=120.75, samples=19 00:41:58.275 iops : min= 640, max= 756, avg=700.32, stdev=30.22, samples=19 00:41:58.275 lat (msec) : 10=0.26%, 20=16.07%, 50=83.68% 00:41:58.275 cpu : usr=98.97%, sys=0.70%, ctx=15, majf=0, minf=29 00:41:58.275 IO depths : 1=3.3%, 2=6.6%, 4=15.3%, 8=64.7%, 16=10.1%, 32=0.0%, >=64=0.0% 00:41:58.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 complete : 0=0.0%, 4=91.5%, 8=3.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.275 filename2: (groupid=0, jobs=1): err= 0: pid=3743730: Mon Nov 25 14:40:01 2024 00:41:58.275 read: IOPS=713, BW=2855KiB/s (2924kB/s)(27.9MiB/10009msec) 00:41:58.275 slat (usec): min=5, max=102, avg=16.48, stdev=14.39 00:41:58.275 clat (usec): min=8180, max=38593, avg=22288.88, stdev=3995.46 00:41:58.275 lat (usec): min=8185, max=38635, avg=22305.35, stdev=3998.17 00:41:58.275 clat percentiles (usec): 00:41:58.275 | 1.00th=[12649], 5.00th=[15139], 10.00th=[16188], 20.00th=[19006], 00:41:58.275 | 30.00th=[22152], 40.00th=[22676], 50.00th=[22938], 
60.00th=[23462], 00:41:58.275 | 70.00th=[23725], 80.00th=[24249], 90.00th=[25560], 95.00th=[29230], 00:41:58.275 | 99.00th=[33424], 99.50th=[34866], 99.90th=[37487], 99.95th=[38536], 00:41:58.275 | 99.99th=[38536] 00:41:58.275 bw ( KiB/s): min= 2608, max= 3120, per=4.26%, avg=2846.84, stdev=147.31, samples=19 00:41:58.275 iops : min= 652, max= 780, avg=711.68, stdev=36.83, samples=19 00:41:58.275 lat (msec) : 10=0.22%, 20=23.71%, 50=76.06% 00:41:58.275 cpu : usr=98.96%, sys=0.71%, ctx=15, majf=0, minf=30 00:41:58.275 IO depths : 1=2.4%, 2=4.8%, 4=12.2%, 8=69.4%, 16=11.3%, 32=0.0%, >=64=0.0% 00:41:58.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 complete : 0=0.0%, 4=90.7%, 8=4.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:58.275 issued rwts: total=7144,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:58.275 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:58.275 00:41:58.275 Run status group 0 (all jobs): 00:41:58.275 READ: bw=65.2MiB/s (68.4MB/s), 2717KiB/s-2923KiB/s (2783kB/s-2993kB/s), io=653MiB (685MB), run=10001-10016msec 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.275 
14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.275 14:40:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.275 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.275 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:58.275 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:58.275 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:58.275 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:58.275 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:41:58.275 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 bdev_null0 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 [2024-11-25 14:40:02.052897] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 bdev_null1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
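The two /dev/fd paths on the fio_bdev command line come from bash process substitution: fd 62 carries the bdev JSON produced by gen_nvmf_target_json, fd 61 the job file produced by gen_fio_conf. A plausible sketch of the call site in target/dif.sh (fd numbers are assigned by the shell, and both helpers are functions from the sourced test scripts):

    # Two process substitutions become /dev/fd/62 (bdev JSON config)
    # and /dev/fd/61 (fio job file) by the time fio parses its argv.
    fio_bdev --ioengine=spdk_bdev \
        --spdk_json_conf <(gen_nvmf_target_json 0 1) \
        <(gen_fio_conf)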
00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:58.276 { 00:41:58.276 "params": { 00:41:58.276 "name": "Nvme$subsystem", 00:41:58.276 "trtype": "$TEST_TRANSPORT", 00:41:58.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:58.276 "adrfam": "ipv4", 00:41:58.276 "trsvcid": "$NVMF_PORT", 00:41:58.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:58.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:58.276 "hdgst": ${hdgst:-false}, 00:41:58.276 "ddgst": ${ddgst:-false} 00:41:58.276 }, 00:41:58.276 "method": "bdev_nvme_attach_controller" 00:41:58.276 } 00:41:58.276 EOF 00:41:58.276 )") 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:58.276 { 00:41:58.276 "params": { 00:41:58.276 "name": "Nvme$subsystem", 00:41:58.276 "trtype": "$TEST_TRANSPORT", 00:41:58.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:58.276 "adrfam": "ipv4", 00:41:58.276 "trsvcid": "$NVMF_PORT", 00:41:58.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:58.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:58.276 "hdgst": ${hdgst:-false}, 00:41:58.276 "ddgst": ${ddgst:-false} 00:41:58.276 }, 00:41:58.276 "method": "bdev_nvme_attach_controller" 00:41:58.276 } 00:41:58.276 EOF 00:41:58.276 )") 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 
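Before launching fio, the wrapper decides whether an ASan runtime must be preloaded ahead of the spdk_bdev plugin: it runs ldd on the plugin, greps for each known sanitizer library, and takes the resolved path from the third column. Condensed, the traced logic is roughly this (a sketch, not the literal helper):

    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    for lib in libasan libclang_rt.asan; do
        # Resolved library path sits in column 3 of ldd output ("name => path (addr)")
        asan_lib=$(ldd "$plugin" | grep "$lib" | awk '{print $3}')
        [[ -n "$asan_lib" ]] && break
    done
    # Preload the sanitizer runtime (empty on this non-ASan build), then the plugin
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

In this run both greps come back empty, which is why the traced LD_PRELOAD contains only the plugin path with a leading space.
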
00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:58.276 "params": { 00:41:58.276 "name": "Nvme0", 00:41:58.276 "trtype": "tcp", 00:41:58.276 "traddr": "10.0.0.2", 00:41:58.276 "adrfam": "ipv4", 00:41:58.276 "trsvcid": "4420", 00:41:58.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:58.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:58.276 "hdgst": false, 00:41:58.276 "ddgst": false 00:41:58.276 }, 00:41:58.276 "method": "bdev_nvme_attach_controller" 00:41:58.276 },{ 00:41:58.276 "params": { 00:41:58.276 "name": "Nvme1", 00:41:58.276 "trtype": "tcp", 00:41:58.276 "traddr": "10.0.0.2", 00:41:58.276 "adrfam": "ipv4", 00:41:58.276 "trsvcid": "4420", 00:41:58.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:58.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:58.276 "hdgst": false, 00:41:58.276 "ddgst": false 00:41:58.276 }, 00:41:58.276 "method": "bdev_nvme_attach_controller" 00:41:58.276 }' 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:58.276 14:40:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:58.276 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:58.276 ... 00:41:58.276 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:58.276 ... 
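gen_fio_conf emits one fio job section per target bdev (the base section plus one more for files=1), each with numjobs=2, which is why fio starts the four threads reported below. Reconstructed from the traced parameters, the job file passed on /dev/fd/61 looks roughly like this; exact option names in the generated file may differ:

    [global]
    thread=1          ; the spdk_bdev ioengine requires thread-based jobs
    runtime=5
    time_based=1      ; assumed from the ~5002 ms runs reported below

    [filename0]
    rw=randread
    bs=8k,16k,128k    ; read,write,trim sizes matching the fio header above
    numjobs=2
    iodepth=8
    filename=Nvme0n1  ; bdev exposed by the attached Nvme0 controller

    [filename1]
    rw=randread
    bs=8k,16k,128k
    numjobs=2
    iodepth=8
    filename=Nvme1n1
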
00:41:58.276 fio-3.35 00:41:58.276 Starting 4 threads 00:42:03.550 00:42:03.550 filename0: (groupid=0, jobs=1): err= 0: pid=3745953: Mon Nov 25 14:40:08 2024 00:42:03.550 read: IOPS=2977, BW=23.3MiB/s (24.4MB/s)(116MiB/5002msec) 00:42:03.550 slat (nsec): min=5441, max=81779, avg=6183.72, stdev=2467.97 00:42:03.550 clat (usec): min=856, max=5137, avg=2670.51, stdev=321.34 00:42:03.550 lat (usec): min=869, max=5167, avg=2676.69, stdev=321.27 00:42:03.550 clat percentiles (usec): 00:42:03.550 | 1.00th=[ 1958], 5.00th=[ 2180], 10.00th=[ 2376], 20.00th=[ 2540], 00:42:03.551 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:42:03.551 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2868], 95.00th=[ 3228], 00:42:03.551 | 99.00th=[ 4015], 99.50th=[ 4080], 99.90th=[ 4490], 99.95th=[ 4817], 00:42:03.551 | 99.99th=[ 5080] 00:42:03.551 bw ( KiB/s): min=23616, max=24032, per=25.19%, avg=23825.60, stdev=142.20, samples=10 00:42:03.551 iops : min= 2952, max= 3004, avg=2978.20, stdev=17.78, samples=10 00:42:03.551 lat (usec) : 1000=0.01% 00:42:03.551 lat (msec) : 2=1.51%, 4=97.44%, 10=1.05% 00:42:03.551 cpu : usr=96.02%, sys=3.70%, ctx=7, majf=0, minf=9 00:42:03.551 IO depths : 1=0.1%, 2=0.2%, 4=70.9%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 issued rwts: total=14893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:03.551 filename0: (groupid=0, jobs=1): err= 0: pid=3745954: Mon Nov 25 14:40:08 2024 00:42:03.551 read: IOPS=2925, BW=22.9MiB/s (24.0MB/s)(114MiB/5001msec) 00:42:03.551 slat (nsec): min=5448, max=26065, avg=6180.73, stdev=1906.46 00:42:03.551 clat (usec): min=1213, max=45721, avg=2719.03, stdev=1035.78 00:42:03.551 lat (usec): min=1218, max=45747, avg=2725.21, stdev=1035.91 00:42:03.551 clat percentiles (usec): 00:42:03.551 | 1.00th=[ 2114], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2606], 00:42:03.551 | 30.00th=[ 2671], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2704], 00:42:03.551 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2900], 95.00th=[ 2999], 00:42:03.551 | 99.00th=[ 3884], 99.50th=[ 4047], 99.90th=[ 4621], 99.95th=[45876], 00:42:03.551 | 99.99th=[45876] 00:42:03.551 bw ( KiB/s): min=21328, max=23728, per=24.71%, avg=23367.11, stdev=775.01, samples=9 00:42:03.551 iops : min= 2666, max= 2966, avg=2920.89, stdev=96.88, samples=9 00:42:03.551 lat (msec) : 2=0.42%, 4=98.82%, 10=0.70%, 50=0.05% 00:42:03.551 cpu : usr=95.58%, sys=4.16%, ctx=8, majf=0, minf=9 00:42:03.551 IO depths : 1=0.1%, 2=0.1%, 4=69.8%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 issued rwts: total=14631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:03.551 filename1: (groupid=0, jobs=1): err= 0: pid=3745955: Mon Nov 25 14:40:08 2024 00:42:03.551 read: IOPS=2937, BW=22.9MiB/s (24.1MB/s)(115MiB/5002msec) 00:42:03.551 slat (nsec): min=5442, max=99100, avg=7686.24, stdev=2939.47 00:42:03.551 clat (usec): min=929, max=4651, avg=2701.89, stdev=261.37 00:42:03.551 lat (usec): min=937, max=4677, avg=2709.57, stdev=261.24 00:42:03.551 clat percentiles (usec): 00:42:03.551 | 1.00th=[ 2073], 5.00th=[ 2409], 10.00th=[ 2507], 
20.00th=[ 2606], 00:42:03.551 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:42:03.551 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2933], 95.00th=[ 3097], 00:42:03.551 | 99.00th=[ 3851], 99.50th=[ 4015], 99.90th=[ 4293], 99.95th=[ 4490], 00:42:03.551 | 99.99th=[ 4621] 00:42:03.551 bw ( KiB/s): min=23280, max=23840, per=24.85%, avg=23505.78, stdev=208.38, samples=9 00:42:03.551 iops : min= 2910, max= 2980, avg=2938.22, stdev=26.05, samples=9 00:42:03.551 lat (usec) : 1000=0.03% 00:42:03.551 lat (msec) : 2=0.61%, 4=98.75%, 10=0.61% 00:42:03.551 cpu : usr=96.20%, sys=3.52%, ctx=8, majf=0, minf=9 00:42:03.551 IO depths : 1=0.1%, 2=0.3%, 4=73.5%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 issued rwts: total=14692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:03.551 filename1: (groupid=0, jobs=1): err= 0: pid=3745956: Mon Nov 25 14:40:08 2024 00:42:03.551 read: IOPS=2981, BW=23.3MiB/s (24.4MB/s)(117MiB/5002msec) 00:42:03.551 slat (nsec): min=5441, max=71717, avg=7562.40, stdev=2876.74 00:42:03.551 clat (usec): min=923, max=5722, avg=2663.85, stdev=254.15 00:42:03.551 lat (usec): min=931, max=5752, avg=2671.41, stdev=254.11 00:42:03.551 clat percentiles (usec): 00:42:03.551 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2540], 00:42:03.551 | 30.00th=[ 2638], 40.00th=[ 2671], 50.00th=[ 2671], 60.00th=[ 2671], 00:42:03.551 | 70.00th=[ 2704], 80.00th=[ 2704], 90.00th=[ 2868], 95.00th=[ 2966], 00:42:03.551 | 99.00th=[ 3720], 99.50th=[ 3884], 99.90th=[ 4490], 99.95th=[ 5407], 00:42:03.551 | 99.99th=[ 5473] 00:42:03.551 bw ( KiB/s): min=23680, max=23984, per=25.23%, avg=23859.56, stdev=115.87, samples=9 00:42:03.551 iops : min= 2960, max= 2998, avg=2982.44, stdev=14.48, samples=9 00:42:03.551 lat (usec) : 1000=0.01% 00:42:03.551 lat (msec) : 2=1.02%, 4=98.68%, 10=0.29% 00:42:03.551 cpu : usr=96.06%, sys=3.68%, ctx=7, majf=0, minf=9 00:42:03.551 IO depths : 1=0.1%, 2=0.2%, 4=69.3%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:03.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:03.551 issued rwts: total=14915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:03.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:03.551 00:42:03.551 Run status group 0 (all jobs): 00:42:03.551 READ: bw=92.4MiB/s (96.8MB/s), 22.9MiB/s-23.3MiB/s (24.0MB/s-24.4MB/s), io=462MiB (484MB), run=5001-5002msec 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 14:40:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.551 00:42:03.551 real 0m24.425s 00:42:03.551 user 5m21.328s 00:42:03.551 sys 0m4.636s 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 ************************************ 00:42:03.551 END TEST fio_dif_rand_params 00:42:03.551 ************************************ 00:42:03.551 14:40:08 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:03.551 14:40:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:03.551 14:40:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 ************************************ 00:42:03.551 START TEST fio_dif_digest 00:42:03.551 ************************************ 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:03.551 14:40:08 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 bdev_null0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:03.551 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:03.552 [2024-11-25 14:40:08.605012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:03.552 { 00:42:03.552 "params": { 00:42:03.552 "name": "Nvme$subsystem", 00:42:03.552 "trtype": "$TEST_TRANSPORT", 00:42:03.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:03.552 "adrfam": "ipv4", 
00:42:03.552 "trsvcid": "$NVMF_PORT", 00:42:03.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:03.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:03.552 "hdgst": ${hdgst:-false}, 00:42:03.552 "ddgst": ${ddgst:-false} 00:42:03.552 }, 00:42:03.552 "method": "bdev_nvme_attach_controller" 00:42:03.552 } 00:42:03.552 EOF 00:42:03.552 )") 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:03.552 14:40:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:03.552 "params": { 00:42:03.552 "name": "Nvme0", 00:42:03.552 "trtype": "tcp", 00:42:03.552 "traddr": "10.0.0.2", 00:42:03.552 "adrfam": "ipv4", 00:42:03.552 "trsvcid": "4420", 00:42:03.552 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:03.552 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:03.552 "hdgst": true, 00:42:03.552 "ddgst": true 00:42:03.552 }, 00:42:03.552 "method": "bdev_nvme_attach_controller" 00:42:03.552 }' 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:03.814 14:40:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:04.084 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:04.084 ... 
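Within the JSON just printed, "hdgst": true and "ddgst": true are what distinguish this digest run from the earlier ones: they enable NVMe/TCP's optional CRC32C protection over each PDU header and over the PDU data payload, so the 128 KiB reads below also exercise digest generation and verification. The values reach the JSON through the parameter-expansion defaults in the heredoc template above:

    # Defaults used by the template: unset -> false, set by the test -> passed through
    unset hdgst ddgst
    echo "${hdgst:-false} ${ddgst:-false}"   # -> false false  (the rand_params runs)
    hdgst=true ddgst=true
    echo "${hdgst:-false} ${ddgst:-false}"   # -> true true    (this fio_dif_digest run)
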
00:42:04.084 fio-3.35 00:42:04.084 Starting 3 threads 00:42:16.318 00:42:16.318 filename0: (groupid=0, jobs=1): err= 0: pid=3747432: Mon Nov 25 14:40:19 2024 00:42:16.318 read: IOPS=316, BW=39.5MiB/s (41.5MB/s)(397MiB/10045msec) 00:42:16.318 slat (nsec): min=5710, max=32502, avg=7192.44, stdev=1500.58 00:42:16.318 clat (usec): min=5488, max=52220, avg=9463.53, stdev=1691.60 00:42:16.318 lat (usec): min=5495, max=52227, avg=9470.73, stdev=1691.57 00:42:16.318 clat percentiles (usec): 00:42:16.318 | 1.00th=[ 6325], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 8160], 00:42:16.318 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10028], 00:42:16.318 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11338], 00:42:16.318 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12780], 99.95th=[49021], 00:42:16.318 | 99.99th=[52167] 00:42:16.318 bw ( KiB/s): min=37888, max=44288, per=35.23%, avg=40636.10, stdev=1914.43, samples=20 00:42:16.318 iops : min= 296, max= 346, avg=317.45, stdev=14.98, samples=20 00:42:16.318 lat (msec) : 10=61.44%, 20=38.50%, 50=0.03%, 100=0.03% 00:42:16.318 cpu : usr=93.67%, sys=6.07%, ctx=18, majf=0, minf=54 00:42:16.318 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.318 issued rwts: total=3177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.318 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:16.318 filename0: (groupid=0, jobs=1): err= 0: pid=3747433: Mon Nov 25 14:40:19 2024 00:42:16.318 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10043msec) 00:42:16.318 slat (nsec): min=5852, max=36935, avg=7264.27, stdev=1661.89 00:42:16.318 clat (usec): min=5463, max=90645, avg=10706.64, stdev=8777.27 00:42:16.318 lat (usec): min=5472, max=90655, avg=10713.91, stdev=8777.27 00:42:16.318 clat percentiles (usec): 00:42:16.318 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:42:16.318 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9110], 00:42:16.318 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[10159], 95.00th=[10814], 00:42:16.318 | 99.00th=[50594], 99.50th=[51119], 99.90th=[90702], 99.95th=[90702], 00:42:16.318 | 99.99th=[90702] 00:42:16.318 bw ( KiB/s): min=25344, max=44032, per=31.14%, avg=35916.80, stdev=5319.63, samples=20 00:42:16.318 iops : min= 198, max= 344, avg=280.60, stdev=41.56, samples=20 00:42:16.318 lat (msec) : 10=87.82%, 20=8.05%, 50=2.46%, 100=1.67% 00:42:16.318 cpu : usr=93.84%, sys=5.91%, ctx=21, majf=0, minf=195 00:42:16.318 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.318 issued rwts: total=2808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.318 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:16.318 filename0: (groupid=0, jobs=1): err= 0: pid=3747434: Mon Nov 25 14:40:19 2024 00:42:16.318 read: IOPS=306, BW=38.3MiB/s (40.2MB/s)(383MiB/10004msec) 00:42:16.318 slat (nsec): min=5828, max=30672, avg=7160.04, stdev=1348.06 00:42:16.318 clat (usec): min=4364, max=52048, avg=9776.11, stdev=2019.43 00:42:16.318 lat (usec): min=4370, max=52079, avg=9783.27, stdev=2019.71 00:42:16.318 clat percentiles (usec): 00:42:16.318 | 1.00th=[ 5997], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 
8356], 00:42:16.318 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:42:16.318 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11469], 95.00th=[11863], 00:42:16.318 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13829], 99.95th=[52167], 00:42:16.318 | 99.99th=[52167] 00:42:16.318 bw ( KiB/s): min=33792, max=43520, per=34.01%, avg=39235.37, stdev=2515.30, samples=19 00:42:16.318 iops : min= 264, max= 340, avg=306.53, stdev=19.65, samples=19 00:42:16.318 lat (msec) : 10=49.33%, 20=50.57%, 100=0.10% 00:42:16.318 cpu : usr=93.24%, sys=6.49%, ctx=17, majf=0, minf=151 00:42:16.318 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:16.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:16.318 issued rwts: total=3067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:16.319 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:16.319 00:42:16.319 Run status group 0 (all jobs): 00:42:16.319 READ: bw=113MiB/s (118MB/s), 34.9MiB/s-39.5MiB/s (36.6MB/s-41.5MB/s), io=1132MiB (1186MB), run=10004-10045msec 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.319 00:42:16.319 real 0m11.269s 00:42:16.319 user 0m46.167s 00:42:16.319 sys 0m2.174s 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:16.319 14:40:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:16.319 ************************************ 00:42:16.319 END TEST fio_dif_digest 00:42:16.319 ************************************ 00:42:16.319 14:40:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:16.319 14:40:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:16.319 rmmod nvme_tcp 00:42:16.319 rmmod nvme_fabrics 00:42:16.319 rmmod nvme_keyring 00:42:16.319 14:40:19 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3737192 ']' 00:42:16.319 14:40:19 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3737192 00:42:16.319 14:40:19 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3737192 ']' 00:42:16.319 14:40:19 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3737192 00:42:16.319 14:40:19 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:42:16.319 14:40:19 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:16.319 14:40:19 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3737192 00:42:16.319 14:40:20 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:16.319 14:40:20 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:16.319 14:40:20 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3737192' 00:42:16.319 killing process with pid 3737192 00:42:16.319 14:40:20 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3737192 00:42:16.319 14:40:20 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3737192 00:42:16.319 14:40:20 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:16.319 14:40:20 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:18.950 Waiting for block devices as requested 00:42:18.950 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:18.950 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:18.950 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:18.950 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:18.950 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:18.950 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:18.950 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:19.223 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:19.223 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:19.484 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:19.484 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:19.484 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:19.484 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:19.745 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:19.745 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:19.745 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:20.006 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:20.268 14:40:25 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:20.268 14:40:25 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:20.268 14:40:25 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:22.815 14:40:27 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:22.815 
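The teardown traced above follows the suite's standard order: unload the host-side NVMe modules (removing nvme-tcp drags out nvme_fabrics and nvme_keyring as dependents), kill the target app, rebind PCI devices, then undo the network plumbing. As a condensed sketch of the traced steps (the netns deletion inside _remove_spdk_ns is an assumed detail; the pid is this run's nvmfpid):

    modprobe -v -r nvme-tcp          # verbose mode shows the dependent rmmods above
    modprobe -v -r nvme-fabrics
    kill 3737192                     # killprocess $nvmfpid
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset   # iso mode only
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk  # assumed: performed by _remove_spdk_ns
    ip -4 addr flush cvl_0_1
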
00:42:22.815 real 1m18.607s 00:42:22.815 user 8m7.884s 00:42:22.815 sys 0m22.713s 00:42:22.815 14:40:27 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:22.815 14:40:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:22.815 ************************************ 00:42:22.815 END TEST nvmf_dif 00:42:22.815 ************************************ 00:42:22.815 14:40:27 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:22.815 14:40:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:22.815 14:40:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:22.815 14:40:27 -- common/autotest_common.sh@10 -- # set +x 00:42:22.815 ************************************ 00:42:22.815 START TEST nvmf_abort_qd_sizes 00:42:22.815 ************************************ 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:22.815 * Looking for test storage... 00:42:22.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.815 --rc genhtml_branch_coverage=1 00:42:22.815 --rc genhtml_function_coverage=1 00:42:22.815 --rc genhtml_legend=1 00:42:22.815 --rc geninfo_all_blocks=1 00:42:22.815 --rc geninfo_unexecuted_blocks=1 00:42:22.815 00:42:22.815 ' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.815 --rc genhtml_branch_coverage=1 00:42:22.815 --rc genhtml_function_coverage=1 00:42:22.815 --rc genhtml_legend=1 00:42:22.815 --rc geninfo_all_blocks=1 00:42:22.815 --rc geninfo_unexecuted_blocks=1 00:42:22.815 00:42:22.815 ' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.815 --rc genhtml_branch_coverage=1 00:42:22.815 --rc genhtml_function_coverage=1 00:42:22.815 --rc genhtml_legend=1 00:42:22.815 --rc geninfo_all_blocks=1 00:42:22.815 --rc geninfo_unexecuted_blocks=1 00:42:22.815 00:42:22.815 ' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:22.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:22.815 --rc genhtml_branch_coverage=1 00:42:22.815 --rc genhtml_function_coverage=1 00:42:22.815 --rc genhtml_legend=1 00:42:22.815 --rc geninfo_all_blocks=1 00:42:22.815 --rc geninfo_unexecuted_blocks=1 00:42:22.815 00:42:22.815 ' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:22.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:22.815 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:22.816 14:40:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:30.964 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:30.964 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:30.964 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:30.965 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:30.965 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:30.965 14:40:34 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:30.965 14:40:34 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:30.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:30.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:42:30.965 00:42:30.965 --- 10.0.0.2 ping statistics --- 00:42:30.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.965 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:30.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:30.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:42:30.965 00:42:30.965 --- 10.0.0.1 ping statistics --- 00:42:30.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.965 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:30.965 14:40:35 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:33.510 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:33.510 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:33.510 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:33.510 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:33.510 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:33.510 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:33.510 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:33.770 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:33.770 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:33.770 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:33.770 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:33.770 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:33.771 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:33.771 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:33.771 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:33.771 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:33.771 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3756860 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3756860 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3756860 ']' 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:34.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:34.342 14:40:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:34.342 [2024-11-25 14:40:39.243483] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:42:34.342 [2024-11-25 14:40:39.243544] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:34.342 [2024-11-25 14:40:39.341649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:34.343 [2024-11-25 14:40:39.388366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:34.343 [2024-11-25 14:40:39.388423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:34.343 [2024-11-25 14:40:39.388432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:34.343 [2024-11-25 14:40:39.388439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:34.343 [2024-11-25 14:40:39.388445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:34.343 [2024-11-25 14:40:39.390488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:34.343 [2024-11-25 14:40:39.390647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:34.343 [2024-11-25 14:40:39.390806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:34.343 [2024-11-25 14:40:39.390807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:35.287 
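The namespace plumbing traced a few entries back is what lets a single host act as both ends of the fabric: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings prove the path before nvmf_tgt is launched under ip netns exec. A condensed sketch of that topology, assuming the same interface names as this run:

# Two-sided NVMe/TCP test topology, as nvmf_tcp_init builds it above (sketch).
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1               # namespace -> root ns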
14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:35.287 14:40:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:35.287 ************************************ 00:42:35.287 START TEST spdk_target_abort 00:42:35.287 ************************************ 00:42:35.287 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:35.287 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:35.287 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:42:35.287 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.287 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.549 spdk_targetn1 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.549 [2024-11-25 14:40:40.481054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.549 [2024-11-25 14:40:40.533536] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:35.549 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:35.550 14:40:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:35.811 [2024-11-25 14:40:40.733769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:480 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:42:35.811 [2024-11-25 14:40:40.733824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:42:35.811 [2024-11-25 14:40:40.742588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:704 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:42:35.811 [2024-11-25 14:40:40.742618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0059 p:1 m:0 dnr:0 00:42:35.811 [2024-11-25 14:40:40.757676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1128 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:42:35.811 [2024-11-25 14:40:40.757705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:008e p:1 m:0 dnr:0 00:42:35.811 [2024-11-25 14:40:40.804715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2504 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:42:35.811 [2024-11-25 14:40:40.804748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:42:35.811 [2024-11-25 14:40:40.835715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3456 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:42:35.811 [2024-11-25 14:40:40.835746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b4 p:0 m:0 dnr:0 00:42:39.116 Initializing NVMe Controllers 00:42:39.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:39.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:39.116 Initialization complete. Launching workers. 
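The run that has just launched is the first leg of a sweep: abort_qd_sizes.sh assembles the connection string one field at a time (trtype, adrfam, traddr, trsvcid, subnqn) and reruns the same abort workload at each queue depth in qds=(4 24 64). The driver loop amounts to the following, assuming the example binary is invoked from the repository root rather than the full Jenkins workspace path seen in the trace:

# The queue-depth sweep abort_qd_sizes.sh performs above (sketch).
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # -q: queue depth, -w rw -M 50: mixed 50/50 read/write, -o 4096: 4 KiB I/O
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

The ABORTED - BY REQUEST completions printed above are that workload in action: reads cancelled in flight by abort commands the example submits against its own I/O.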
00:42:39.116 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11198, failed: 5 00:42:39.116 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2271, failed to submit 8932 00:42:39.116 success 747, unsuccessful 1524, failed 0 00:42:39.116 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:39.116 14:40:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:39.116 [2024-11-25 14:40:43.958367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:608 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:42:39.116 [2024-11-25 14:40:43.958405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0057 p:1 m:0 dnr:0 00:42:39.116 [2024-11-25 14:40:43.982312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200004e40000 PRP2 0x0 00:42:39.116 [2024-11-25 14:40:43.982336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:42:39.116 [2024-11-25 14:40:44.006170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1760 len:8 PRP1 0x200004e42000 PRP2 0x0 00:42:39.116 [2024-11-25 14:40:44.006194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00e1 p:1 m:0 dnr:0 00:42:39.116 [2024-11-25 14:40:44.014413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:1984 len:8 PRP1 0x200004e54000 PRP2 0x0 00:42:39.116 [2024-11-25 14:40:44.014435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:00f9 p:1 m:0 dnr:0 00:42:39.116 [2024-11-25 14:40:44.078313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:3416 len:8 PRP1 0x200004e44000 PRP2 0x0 00:42:39.116 [2024-11-25 14:40:44.078336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:42:39.116 [2024-11-25 14:40:44.097348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:4000 len:8 PRP1 0x200004e5c000 PRP2 0x0 00:42:39.116 [2024-11-25 14:40:44.097372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:42:39.377 [2024-11-25 14:40:44.412378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:11120 len:8 PRP1 0x200004e48000 PRP2 0x0 00:42:39.377 [2024-11-25 14:40:44.412408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:42:39.947 [2024-11-25 14:40:44.958485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:23744 len:8 PRP1 0x200004e54000 PRP2 0x0 00:42:39.947 [2024-11-25 14:40:44.958519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:00a3 p:0 m:0 dnr:0 00:42:42.493 Initializing NVMe Controllers 00:42:42.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:testnqn 00:42:42.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:42.493 Initialization complete. Launching workers. 00:42:42.493 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8692, failed: 8 00:42:42.493 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1219, failed to submit 7481 00:42:42.493 success 340, unsuccessful 879, failed 0 00:42:42.493 14:40:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:42.493 14:40:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:42.754 [2024-11-25 14:40:47.756983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:145 nsid:1 lba:46992 len:8 PRP1 0x200004b16000 PRP2 0x0 00:42:42.754 [2024-11-25 14:40:47.757040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:145 cdw0:0 sqhd:00be p:1 m:0 dnr:0 00:42:43.327 [2024-11-25 14:40:48.182690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:95528 len:8 PRP1 0x200004b26000 PRP2 0x0 00:42:43.327 [2024-11-25 14:40:48.182717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0072 p:1 m:0 dnr:0 00:42:44.269 [2024-11-25 14:40:49.343901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:148 nsid:1 lba:227736 len:8 PRP1 0x200004b28000 PRP2 0x0 00:42:44.269 [2024-11-25 14:40:49.343929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:148 cdw0:0 sqhd:00fa p:1 m:0 dnr:0 00:42:45.651 Initializing NVMe Controllers 00:42:45.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:45.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:45.651 Initialization complete. Launching workers. 
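Each summary block reconciles exactly, as if every I/O the workload submits draws one abort attempt: "I/O completed + failed" equals "abort submitted + failed to submit", and the submitted aborts split into "success + unsuccessful". For the qd=24 leg above: 8692 + 8 = 1219 + 7481 = 8700, and 340 + 879 = 1219 (likewise 11198 + 5 = 2271 + 8932 = 11203 for qd=4). A throwaway checker for a captured log, assuming the summary lines are saved without the Jenkins time prefix:

# Cross-check the abort accounting lines from a saved run (sketch).
awk '
    { gsub(/,/, "") }                                  # drop the separating commas
    /I\/O completed:/ { io = $(NF-2); iofail = $NF }   # NS: ... I/O completed: N failed: M
    /abort submitted/ { subm = $(NF-4); nosub = $NF }  # CTRLR: ... submitted N ... submit M
    /^success /       { printf "attempted %d vs seen %d; resolved %d vs submitted %d\n",
                               subm + nosub, io + iofail, $2 + $4, subm }
' abort_run.log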
00:42:45.651 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42897, failed: 3 00:42:45.651 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2760, failed to submit 40140 00:42:45.651 success 598, unsuccessful 2162, failed 0 00:42:45.651 14:40:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:45.651 14:40:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.651 14:40:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:45.651 14:40:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.651 14:40:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:45.651 14:40:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.651 14:40:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3756860 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3756860 ']' 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3756860 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3756860 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3756860' 00:42:47.561 killing process with pid 3756860 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3756860 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3756860 00:42:47.561 00:42:47.561 real 0m12.224s 00:42:47.561 user 0m49.777s 00:42:47.561 sys 0m2.099s 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:47.561 ************************************ 00:42:47.561 END TEST spdk_target_abort 00:42:47.561 ************************************ 00:42:47.561 14:40:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:47.561 14:40:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:47.561 14:40:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:47.561 14:40:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:47.561 ************************************ 00:42:47.561 START TEST kernel_target_abort 00:42:47.561 
************************************ 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:47.561 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:47.562 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:47.562 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:47.562 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:47.562 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:42:47.562 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:47.562 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:47.562 14:40:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:50.857 Waiting for block devices as requested 00:42:50.857 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:51.118 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:51.118 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:51.118 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:51.379 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:51.379 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:51.379 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:51.639 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:51.639 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:51.900 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:51.900 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:51.900 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:52.160 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:52.160 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:52.160 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:52.420 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:52.420 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:52.680 No valid GPT data, bailing 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:52.680 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:52.941 14:40:57 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:42:52.941 00:42:52.941 Discovery Log Number of Records 2, Generation counter 2 00:42:52.941 =====Discovery Log Entry 0====== 00:42:52.941 trtype: tcp 00:42:52.941 adrfam: ipv4 00:42:52.941 subtype: current discovery subsystem 00:42:52.941 treq: not specified, sq flow control disable supported 00:42:52.941 portid: 1 00:42:52.941 trsvcid: 4420 00:42:52.941 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:52.941 traddr: 10.0.0.1 00:42:52.941 eflags: none 00:42:52.941 sectype: none 00:42:52.941 =====Discovery Log Entry 1====== 00:42:52.941 trtype: tcp 00:42:52.941 adrfam: ipv4 00:42:52.941 subtype: nvme subsystem 00:42:52.941 treq: not specified, sq flow control disable supported 00:42:52.941 portid: 1 00:42:52.941 trsvcid: 4420 00:42:52.941 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:52.941 traddr: 10.0.0.1 00:42:52.941 eflags: none 00:42:52.941 sectype: none 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:52.941 14:40:57 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:52.941 14:40:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:56.241 Initializing NVMe Controllers 00:42:56.241 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:56.241 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:56.241 Initialization complete. Launching workers. 00:42:56.241 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67372, failed: 0 00:42:56.241 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67372, failed to submit 0 00:42:56.241 success 0, unsuccessful 67372, failed 0 00:42:56.241 14:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:56.241 14:41:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:59.542 Initializing NVMe Controllers 00:42:59.542 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:59.542 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:59.542 Initialization complete. Launching workers. 
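The kernel-side target this sweep runs against was assembled entirely through configfs a few entries back: mkdir the subsystem, namespace and port, echo the attributes, then a single ln -s to publish the subsystem on the port. The xtrace output hides the redirection targets of those echoes, so the attribute paths below are filled in from the standard nvmet configfs ABI rather than the trace itself; a condensed sketch:

# Kernel NVMe/TCP target from nothing but configfs writes (sketch;
# attribute names assumed from the stock nvmet layout, not shown by xtrace).
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet                         # nvmet_tcp should be pulled in when the port goes tcp
mkdir "$sub" "$sub/namespaces/1" "$port"
echo "SPDK-$nqn"   > "$sub/attr_model"            # model string (attribute name assumed)
echo 1             > "$sub/attr_allow_any_host"
echo /dev/nvme0n1  > "$sub/namespaces/1/device_path"
echo 1             > "$sub/namespaces/1/enable"
echo 10.0.0.1      > "$port/addr_traddr"
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"       # port now exports the subsystem

Teardown, traced later in this run as clean_kernel_target, is the mirror image: remove the symlink, rmdir namespace, port and subsystem in reverse order, then modprobe -r nvmet_tcp nvmet.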
00:42:59.542 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 113991, failed: 0 00:42:59.542 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28734, failed to submit 85257 00:42:59.542 success 0, unsuccessful 28734, failed 0 00:42:59.542 14:41:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:59.542 14:41:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:02.840 Initializing NVMe Controllers 00:43:02.840 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:02.840 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:02.840 Initialization complete. Launching workers. 00:43:02.840 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146756, failed: 0 00:43:02.840 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36746, failed to submit 110010 00:43:02.840 success 0, unsuccessful 36746, failed 0 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:02.840 14:41:07 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:06.139 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:06.139 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:06.139 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:06.139 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:43:06.140 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:43:06.140 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:43:07.525 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:43:08.098 00:43:08.098 real 0m20.416s 00:43:08.098 user 0m10.082s 00:43:08.098 sys 0m5.958s 00:43:08.098 14:41:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:08.098 14:41:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:08.098 ************************************ 00:43:08.098 END TEST kernel_target_abort 00:43:08.098 ************************************ 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:08.098 rmmod nvme_tcp 00:43:08.098 rmmod nvme_fabrics 00:43:08.098 rmmod nvme_keyring 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3756860 ']' 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3756860 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3756860 ']' 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3756860 00:43:08.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3756860) - No such process 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3756860 is not found' 00:43:08.098 Process with pid 3756860 is not found 00:43:08.098 14:41:12 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:08.098 14:41:13 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:11.398 Waiting for block devices as requested 00:43:11.398 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:11.398 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:11.398 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:11.658 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:11.658 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:11.658 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:11.917 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:11.917 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:11.917 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:43:12.177 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:12.177 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:12.437 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:12.437 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:12.437 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:12.697 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:12.697 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:12.697 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:12.956 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:12.957 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:12.957 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:12.957 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:43:12.957 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:12.957 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:43:13.217 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:13.217 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:13.217 14:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:13.217 14:41:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:13.217 14:41:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:15.128 14:41:20 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:15.128 00:43:15.128 real 0m52.754s 00:43:15.128 user 1m5.329s 00:43:15.128 sys 0m19.285s 00:43:15.128 14:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:15.128 14:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:15.128 ************************************ 00:43:15.128 END TEST nvmf_abort_qd_sizes 00:43:15.128 ************************************ 00:43:15.128 14:41:20 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:15.128 14:41:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:15.128 14:41:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:15.128 14:41:20 -- common/autotest_common.sh@10 -- # set +x 00:43:15.128 ************************************ 00:43:15.128 START TEST keyring_file 00:43:15.128 ************************************ 00:43:15.128 14:41:20 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:15.389 * Looking for test storage... 
00:43:15.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:15.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.389 --rc genhtml_branch_coverage=1 00:43:15.389 --rc genhtml_function_coverage=1 00:43:15.389 --rc genhtml_legend=1 00:43:15.389 --rc geninfo_all_blocks=1 00:43:15.389 --rc geninfo_unexecuted_blocks=1 00:43:15.389 00:43:15.389 ' 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:15.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.389 --rc genhtml_branch_coverage=1 00:43:15.389 --rc genhtml_function_coverage=1 00:43:15.389 --rc genhtml_legend=1 00:43:15.389 --rc geninfo_all_blocks=1 
00:43:15.389 --rc geninfo_unexecuted_blocks=1 00:43:15.389 00:43:15.389 ' 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:15.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.389 --rc genhtml_branch_coverage=1 00:43:15.389 --rc genhtml_function_coverage=1 00:43:15.389 --rc genhtml_legend=1 00:43:15.389 --rc geninfo_all_blocks=1 00:43:15.389 --rc geninfo_unexecuted_blocks=1 00:43:15.389 00:43:15.389 ' 00:43:15.389 14:41:20 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:15.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.389 --rc genhtml_branch_coverage=1 00:43:15.389 --rc genhtml_function_coverage=1 00:43:15.389 --rc genhtml_legend=1 00:43:15.389 --rc geninfo_all_blocks=1 00:43:15.389 --rc geninfo_unexecuted_blocks=1 00:43:15.389 00:43:15.389 ' 00:43:15.389 14:41:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:15.389 14:41:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:15.389 14:41:20 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:15.389 14:41:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:15.390 14:41:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.390 14:41:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.390 14:41:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.390 14:41:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:15.390 14:41:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:15.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:15.390 14:41:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:15.390 14:41:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:15.390 14:41:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:15.390 14:41:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:15.390 14:41:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:15.390 14:41:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
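prep_key, which the trace is stepping into here, is all the key-material handling there is: take a hex secret, wrap it in the NVMe/TCP PSK interchange framing via an inline python snippet, write it to a mktemp file, chmod 0600. The framing below follows the interchange format as specified for NVMe/TCP (base64 of the key with its CRC-32 appended, digest id 00 for an unhashed PSK); treat it as a sketch of what the script's python one-liner computes, not a copy of it:

# NVMe/TCP interchange-format PSK file, as prep_key builds below (sketch).
key=00112233445566778899aabbccddeeff          # hex secret from file.sh
path=$(mktemp)
python3 - "$key" <<'PY' > "$path"
import base64, struct, sys, zlib
raw = bytes.fromhex(sys.argv[1])
crc = struct.pack('<I', zlib.crc32(raw) & 0xffffffff)  # CRC-32 of the key, little-endian
print(f'NVMeTLSkey-1:00:{base64.b64encode(raw + crc).decode()}:')  # digest id 00 = no hash
PY
chmod 0600 "$path"                            # same permissions the trace applies

The two resulting paths (/tmp/tmp.uczbZnzu6D and /tmp/tmp.G4nqjdOnwI below) are what the rest of file.sh hands to the keyring as key0 and key1.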
00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@17 -- # name=key0
00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@17 -- # digest=0
00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@18 -- # mktemp
00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uczbZnzu6D
00:43:15.390 14:41:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:43:15.390 14:41:20 keyring_file -- nvmf/common.sh@733 -- # python -
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uczbZnzu6D
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uczbZnzu6D
00:43:15.650 14:41:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uczbZnzu6D
00:43:15.650 14:41:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@17 -- # name=key1
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@17 -- # digest=0
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@18 -- # mktemp
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.G4nqjdOnwI
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:43:15.650 14:41:20 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:43:15.650 14:41:20 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:43:15.650 14:41:20 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:43:15.650 14:41:20 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:43:15.650 14:41:20 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:43:15.650 14:41:20 keyring_file -- nvmf/common.sh@733 -- # python -
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.G4nqjdOnwI
00:43:15.650 14:41:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.G4nqjdOnwI
00:43:15.650 14:41:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.G4nqjdOnwI
00:43:15.650 14:41:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=3767046
00:43:15.650 14:41:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3767046
00:43:15.650 14:41:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:43:15.650 14:41:20 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3767046 ']'
00:43:15.650 14:41:20 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:43:15.650 14:41:20 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:43:15.650 14:41:20 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:43:15.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:43:15.650 14:41:20 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:43:15.650 14:41:20 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:15.650 [2024-11-25 14:41:20.634030] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:43:15.650 [2024-11-25 14:41:20.634111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3767046 ]
00:43:15.650 [2024-11-25 14:41:20.726356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:15.910 [2024-11-25 14:41:20.779058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:43:16.480 14:41:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:16.480 [2024-11-25 14:41:21.437729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:43:16.480 null0
00:43:16.480 [2024-11-25 14:41:21.469767] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:43:16.480 [2024-11-25 14:41:21.470337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:43:16.480 14:41:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:16.480 [2024-11-25 14:41:21.501835] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:43:16.480 request:
00:43:16.480 {
00:43:16.480 "nqn": "nqn.2016-06.io.spdk:cnode0",
00:43:16.480 "secure_channel": false,
00:43:16.480 "listen_address": {
00:43:16.480 "trtype": "tcp",
00:43:16.480 "traddr": "127.0.0.1",
00:43:16.480 "trsvcid": "4420"
00:43:16.480 },
00:43:16.480 "method": "nvmf_subsystem_add_listener",
00:43:16.480 "req_id": 1
00:43:16.480 }
00:43:16.480 Got JSON-RPC error response
00:43:16.480 response:
00:43:16.480 {
00:43:16.480 "code": -32602,
00:43:16.480 "message": "Invalid parameters"
00:43:16.480 }
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:43:16.480 14:41:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=3767086
00:43:16.480 14:41:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3767086 /var/tmp/bperf.sock
00:43:16.480 14:41:21 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3767086 ']'
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:43:16.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:43:16.480 14:41:21 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:16.480 [2024-11-25 14:41:21.563611] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization...
00:43:16.480 [2024-11-25 14:41:21.563678] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3767086 ]
00:43:16.760 [2024-11-25 14:41:21.654571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:43:16.760 [2024-11-25 14:41:21.707899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:43:17.373 14:41:22 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:43:17.373 14:41:22 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:43:17.373 14:41:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:17.373 14:41:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:17.637 14:41:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.G4nqjdOnwI
00:43:17.637 14:41:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.G4nqjdOnwI
00:43:17.897 14:41:22 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:43:17.897 14:41:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:43:17.897 14:41:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:17.897 14:41:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:17.897 14:41:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:17.897 14:41:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.uczbZnzu6D == \/\t\m\p\/\t\m\p\.\u\c\z\b\Z\n\z\u\6\D ]]
00:43:17.897 14:41:22 keyring_file -- keyring/file.sh@53 -- # get_key key1
00:43:17.897 14:41:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path
00:43:17.897 14:41:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:17.897 14:41:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:43:17.897 14:41:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:18.157 14:41:23 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.G4nqjdOnwI == \/\t\m\p\/\t\m\p\.\G\4\n\q\j\d\O\n\w\I ]]
00:43:18.157 14:41:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0
00:43:18.157 14:41:23 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:43:18.157 14:41:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:18.157 14:41:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:18.157 14:41:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:18.157 14:41:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:18.418 14:41:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:43:18.418 14:41:23 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1
00:43:18.418 14:41:23 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:43:18.418 14:41:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:18.418 14:41:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:18.418 14:41:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:43:18.418 14:41:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:18.678 14:41:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 ))
00:43:18.678 14:41:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:18.678 14:41:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:18.678 [2024-11-25 14:41:23.716813] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:43:18.939 nvme0n1
00:43:18.939 14:41:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0
00:43:18.939 14:41:23 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:43:18.939 14:41:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:18.939 14:41:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:18.939 14:41:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:18.939 14:41:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:18.939 14:41:24 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 ))
00:43:18.939 14:41:24 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1
00:43:18.939 14:41:24 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:43:18.939 14:41:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:18.939 14:41:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:18.939 14:41:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:43:18.939 14:41:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:19.200 14:41:24 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:43:19.200 14:41:24 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:43:19.461 Running I/O for 1 seconds...
00:43:20.401 18958.00 IOPS, 74.05 MiB/s
00:43:20.401 Latency(us)
00:43:20.401 [2024-11-25T13:41:25.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:20.401 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:43:20.401 nvme0n1 : 1.00 19011.19 74.26 0.00 0.00 6719.83 3904.85 14417.92
00:43:20.401 [2024-11-25T13:41:25.491Z] ===================================================================================================================
00:43:20.401 [2024-11-25T13:41:25.491Z] Total : 19011.19 74.26 0.00 0.00 6719.83 3904.85 14417.92
00:43:20.401 {
00:43:20.401 "results": [
00:43:20.401 {
00:43:20.401 "job": "nvme0n1",
00:43:20.401 "core_mask": "0x2",
00:43:20.401 "workload": "randrw",
00:43:20.401 "percentage": 50,
00:43:20.401 "status": "finished",
00:43:20.401 "queue_depth": 128,
00:43:20.401 "io_size": 4096,
00:43:20.401 "runtime": 1.003935,
00:43:20.401 "iops": 19011.190963558398,
00:43:20.401 "mibps": 74.26246470139999,
00:43:20.401 "io_failed": 0,
00:43:20.401 "io_timeout": 0,
00:43:20.401 "avg_latency_us": 6719.830012923959,
00:43:20.401 "min_latency_us": 3904.8533333333335,
00:43:20.401 "max_latency_us": 14417.92
00:43:20.401 }
00:43:20.401 ],
00:43:20.401 "core_count": 1
00:43:20.401 }
00:43:20.401 14:41:25 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:43:20.401 14:41:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:43:20.661 14:41:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:20.661 14:41:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:43:20.661 14:41:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:43:20.661 14:41:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:20.921 14:41:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:43:20.921 14:41:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:43:20.921 14:41:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:43:20.921 14:41:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:43:20.921 14:41:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:43:20.921 14:41:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:20.921 14:41:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:43:20.921 14:41:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:20.921 14:41:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:43:20.921 14:41:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:43:21.182 [2024-11-25 14:41:26.053359] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:43:21.182 [2024-11-25 14:41:26.053832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1452c70 (107): Transport endpoint is not connected
00:43:21.182 [2024-11-25 14:41:26.054828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1452c70 (9): Bad file descriptor
00:43:21.182 [2024-11-25 14:41:26.055830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:43:21.182 [2024-11-25 14:41:26.055839] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:43:21.182 [2024-11-25 14:41:26.055844] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:43:21.182 [2024-11-25 14:41:26.055851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
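Note: the bdev_nvme_attach_controller call above is wrapped in NOT and is expected to fail, presumably because key1 is not the PSK the target accepts for this host, so the JSON-RPC error that follows is the assertion passing rather than a test failure. A simplified sketch of the wrapper, inferred from the autotest_common.sh trace (the real helper also validates its argument and masks signal exit codes above 128):

    NOT() {
        # run the wrapped command and succeed only if it fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }
    # usage: NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1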
00:43:21.182 request:
00:43:21.182 {
00:43:21.182 "name": "nvme0",
00:43:21.182 "trtype": "tcp",
00:43:21.182 "traddr": "127.0.0.1",
00:43:21.182 "adrfam": "ipv4",
00:43:21.182 "trsvcid": "4420",
00:43:21.182 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:43:21.182 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:43:21.182 "prchk_reftag": false,
00:43:21.182 "prchk_guard": false,
00:43:21.182 "hdgst": false,
00:43:21.182 "ddgst": false,
00:43:21.182 "psk": "key1",
00:43:21.182 "allow_unrecognized_csi": false,
00:43:21.182 "method": "bdev_nvme_attach_controller",
00:43:21.182 "req_id": 1
00:43:21.182 }
00:43:21.182 Got JSON-RPC error response
00:43:21.182 response:
00:43:21.182 {
00:43:21.182 "code": -5,
00:43:21.182 "message": "Input/output error"
00:43:21.182 }
00:43:21.182 14:41:26 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:43:21.182 14:41:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:43:21.182 14:41:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:43:21.182 14:41:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:43:21.182 14:41:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:21.182 14:41:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:43:21.182 14:41:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:21.182 14:41:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:43:21.443 14:41:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:43:21.443 14:41:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:43:21.443 14:41:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:43:21.703 14:41:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:43:21.703 14:41:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:43:21.703 14:41:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:43:21.703 14:41:26 keyring_file -- keyring/file.sh@78 -- # jq length
00:43:21.703 14:41:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:21.962 14:41:26 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:43:21.962 14:41:26 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.uczbZnzu6D
00:43:21.962 14:41:26 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:21.962 14:41:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:43:21.962 14:41:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:21.962 14:41:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:43:21.962 14:41:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:21.962 14:41:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:43:21.962 14:41:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:21.962 14:41:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:21.962 14:41:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:22.222 [2024-11-25 14:41:27.063457] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uczbZnzu6D': 0100660
00:43:22.222 [2024-11-25 14:41:27.063477] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:43:22.222 request:
00:43:22.222 {
00:43:22.222 "name": "key0",
00:43:22.222 "path": "/tmp/tmp.uczbZnzu6D",
00:43:22.222 "method": "keyring_file_add_key",
00:43:22.222 "req_id": 1
00:43:22.222 }
00:43:22.222 Got JSON-RPC error response
00:43:22.222 response:
00:43:22.222 {
00:43:22.222 "code": -1,
00:43:22.222 "message": "Operation not permitted"
00:43:22.222 }
00:43:22.222 14:41:27 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:43:22.222 14:41:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:43:22.222 14:41:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:43:22.222 14:41:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:43:22.222 14:41:27 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.uczbZnzu6D
00:43:22.222 14:41:27 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:22.222 14:41:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D
00:43:22.222 14:41:27 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.uczbZnzu6D
00:43:22.222 14:41:27 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:43:22.222 14:41:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:43:22.222 14:41:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:22.222 14:41:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:22.222 14:41:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:22.222 14:41:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:22.482 14:41:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:43:22.482 14:41:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:22.482 14:41:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:43:22.482 14:41:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:22.482 14:41:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:43:22.482 14:41:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:22.482 14:41:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:43:22.482 14:41:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:43:22.482 14:41:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:22.482 14:41:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:22.742 [2024-11-25 14:41:27.588792] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uczbZnzu6D': No such file or directory
00:43:22.742 [2024-11-25 14:41:27.588805] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:43:22.742 [2024-11-25 14:41:27.588818] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:43:22.742 [2024-11-25 14:41:27.588823] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:43:22.742 [2024-11-25 14:41:27.588828] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:43:22.742 [2024-11-25 14:41:27.588833] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:43:22.742 request:
00:43:22.742 {
00:43:22.742 "name": "nvme0",
00:43:22.742 "trtype": "tcp",
00:43:22.742 "traddr": "127.0.0.1",
00:43:22.742 "adrfam": "ipv4",
00:43:22.742 "trsvcid": "4420",
00:43:22.742 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:43:22.742 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:43:22.742 "prchk_reftag": false,
00:43:22.742 "prchk_guard": false,
00:43:22.742 "hdgst": false,
00:43:22.742 "ddgst": false,
00:43:22.742 "psk": "key0",
00:43:22.742 "allow_unrecognized_csi": false,
00:43:22.742 "method": "bdev_nvme_attach_controller",
00:43:22.742 "req_id": 1
00:43:22.742 }
00:43:22.742 Got JSON-RPC error response
00:43:22.742 response:
00:43:22.742 {
00:43:22.742 "code": -19,
00:43:22.742 "message": "No such device"
00:43:22.742 }
00:43:22.742 14:41:27 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:43:22.742 14:41:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:43:22.742 14:41:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:43:22.742 14:41:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:43:22.742 14:41:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:43:22.742 14:41:27 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path
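Note: the two negative tests above pin down keyring_file's file checks: a key file with mode 0660 is rejected at keyring_file_add_key time (-1, Operation not permitted, from keyring_file_check_path), and a registered key whose backing file has been deleted fails at attach time (-19, No such device). A sketch of the permission half, assuming the same bperf socket and key file:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    chmod 0660 /tmp/tmp.uczbZnzu6D
    "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D   # rejected: Operation not permitted
    chmod 0600 /tmp/tmp.uczbZnzu6D
    "$rpc" -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uczbZnzu6D   # accepted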
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@17 -- # name=key0
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@17 -- # digest=0
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@18 -- # mktemp
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IDTb6Arpxf
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:43:22.742 14:41:27 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:43:22.742 14:41:27 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:43:22.742 14:41:27 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:43:22.742 14:41:27 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:43:22.742 14:41:27 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:43:22.742 14:41:27 keyring_file -- nvmf/common.sh@733 -- # python -
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IDTb6Arpxf
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IDTb6Arpxf
00:43:22.742 14:41:27 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.IDTb6Arpxf
00:43:22.742 14:41:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IDTb6Arpxf
00:43:22.742 14:41:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IDTb6Arpxf
00:43:23.002 14:41:27 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:23.002 14:41:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:43:23.261 nvme0n1
00:43:23.261 14:41:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0
00:43:23.261 14:41:28 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:43:23.261 14:41:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:43:23.261 14:41:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:23.261 14:41:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:23.261 14:41:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:43:23.522 14:41:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 ))
00:43:23.522 14:41:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0
00:43:23.522 14:41:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:43:23.783 14:41:28 keyring_file -- keyring/file.sh@102 -- # get_key key0
00:43:23.783 14:41:28 keyring_file -- keyring/file.sh@102 -- # jq -r .removed
00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:23.783 14:41:28 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:23.783 14:41:28 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:23.783 14:41:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:24.044 14:41:28 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:24.044 14:41:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:24.044 14:41:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:24.303 14:41:29 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:24.303 14:41:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:24.303 14:41:29 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:24.303 14:41:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:24.303 14:41:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IDTb6Arpxf 00:43:24.303 14:41:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IDTb6Arpxf 00:43:24.563 14:41:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.G4nqjdOnwI 00:43:24.563 14:41:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.G4nqjdOnwI 00:43:24.823 14:41:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:24.824 14:41:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:24.824 nvme0n1 00:43:25.084 14:41:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:25.084 14:41:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:25.084 14:41:30 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:25.084 "subsystems": [ 00:43:25.084 { 00:43:25.084 "subsystem": "keyring", 00:43:25.084 "config": [ 00:43:25.084 { 00:43:25.084 "method": "keyring_file_add_key", 00:43:25.084 "params": { 00:43:25.084 "name": "key0", 00:43:25.084 "path": "/tmp/tmp.IDTb6Arpxf" 00:43:25.084 } 00:43:25.084 }, 00:43:25.084 { 00:43:25.084 "method": "keyring_file_add_key", 00:43:25.084 "params": { 00:43:25.084 "name": "key1", 00:43:25.084 "path": "/tmp/tmp.G4nqjdOnwI" 00:43:25.084 } 00:43:25.084 } 00:43:25.084 ] 00:43:25.084 
}, 00:43:25.084 { 00:43:25.084 "subsystem": "iobuf", 00:43:25.084 "config": [ 00:43:25.084 { 00:43:25.084 "method": "iobuf_set_options", 00:43:25.084 "params": { 00:43:25.084 "small_pool_count": 8192, 00:43:25.084 "large_pool_count": 1024, 00:43:25.084 "small_bufsize": 8192, 00:43:25.084 "large_bufsize": 135168, 00:43:25.084 "enable_numa": false 00:43:25.084 } 00:43:25.084 } 00:43:25.084 ] 00:43:25.084 }, 00:43:25.084 { 00:43:25.084 "subsystem": "sock", 00:43:25.084 "config": [ 00:43:25.084 { 00:43:25.084 "method": "sock_set_default_impl", 00:43:25.084 "params": { 00:43:25.084 "impl_name": "posix" 00:43:25.084 } 00:43:25.084 }, 00:43:25.084 { 00:43:25.084 "method": "sock_impl_set_options", 00:43:25.084 "params": { 00:43:25.084 "impl_name": "ssl", 00:43:25.084 "recv_buf_size": 4096, 00:43:25.084 "send_buf_size": 4096, 00:43:25.084 "enable_recv_pipe": true, 00:43:25.084 "enable_quickack": false, 00:43:25.084 "enable_placement_id": 0, 00:43:25.084 "enable_zerocopy_send_server": true, 00:43:25.084 "enable_zerocopy_send_client": false, 00:43:25.084 "zerocopy_threshold": 0, 00:43:25.084 "tls_version": 0, 00:43:25.084 "enable_ktls": false 00:43:25.084 } 00:43:25.084 }, 00:43:25.084 { 00:43:25.084 "method": "sock_impl_set_options", 00:43:25.084 "params": { 00:43:25.084 "impl_name": "posix", 00:43:25.084 "recv_buf_size": 2097152, 00:43:25.084 "send_buf_size": 2097152, 00:43:25.084 "enable_recv_pipe": true, 00:43:25.084 "enable_quickack": false, 00:43:25.084 "enable_placement_id": 0, 00:43:25.084 "enable_zerocopy_send_server": true, 00:43:25.084 "enable_zerocopy_send_client": false, 00:43:25.084 "zerocopy_threshold": 0, 00:43:25.084 "tls_version": 0, 00:43:25.084 "enable_ktls": false 00:43:25.084 } 00:43:25.084 } 00:43:25.084 ] 00:43:25.084 }, 00:43:25.084 { 00:43:25.084 "subsystem": "vmd", 00:43:25.084 "config": [] 00:43:25.084 }, 00:43:25.084 { 00:43:25.084 "subsystem": "accel", 00:43:25.084 "config": [ 00:43:25.084 { 00:43:25.084 "method": "accel_set_options", 00:43:25.084 "params": { 00:43:25.084 "small_cache_size": 128, 00:43:25.084 "large_cache_size": 16, 00:43:25.084 "task_count": 2048, 00:43:25.084 "sequence_count": 2048, 00:43:25.085 "buf_count": 2048 00:43:25.085 } 00:43:25.085 } 00:43:25.085 ] 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "subsystem": "bdev", 00:43:25.085 "config": [ 00:43:25.085 { 00:43:25.085 "method": "bdev_set_options", 00:43:25.085 "params": { 00:43:25.085 "bdev_io_pool_size": 65535, 00:43:25.085 "bdev_io_cache_size": 256, 00:43:25.085 "bdev_auto_examine": true, 00:43:25.085 "iobuf_small_cache_size": 128, 00:43:25.085 "iobuf_large_cache_size": 16 00:43:25.085 } 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "method": "bdev_raid_set_options", 00:43:25.085 "params": { 00:43:25.085 "process_window_size_kb": 1024, 00:43:25.085 "process_max_bandwidth_mb_sec": 0 00:43:25.085 } 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "method": "bdev_iscsi_set_options", 00:43:25.085 "params": { 00:43:25.085 "timeout_sec": 30 00:43:25.085 } 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "method": "bdev_nvme_set_options", 00:43:25.085 "params": { 00:43:25.085 "action_on_timeout": "none", 00:43:25.085 "timeout_us": 0, 00:43:25.085 "timeout_admin_us": 0, 00:43:25.085 "keep_alive_timeout_ms": 10000, 00:43:25.085 "arbitration_burst": 0, 00:43:25.085 "low_priority_weight": 0, 00:43:25.085 "medium_priority_weight": 0, 00:43:25.085 "high_priority_weight": 0, 00:43:25.085 "nvme_adminq_poll_period_us": 10000, 00:43:25.085 "nvme_ioq_poll_period_us": 0, 00:43:25.085 "io_queue_requests": 512, 00:43:25.085 
"delay_cmd_submit": true, 00:43:25.085 "transport_retry_count": 4, 00:43:25.085 "bdev_retry_count": 3, 00:43:25.085 "transport_ack_timeout": 0, 00:43:25.085 "ctrlr_loss_timeout_sec": 0, 00:43:25.085 "reconnect_delay_sec": 0, 00:43:25.085 "fast_io_fail_timeout_sec": 0, 00:43:25.085 "disable_auto_failback": false, 00:43:25.085 "generate_uuids": false, 00:43:25.085 "transport_tos": 0, 00:43:25.085 "nvme_error_stat": false, 00:43:25.085 "rdma_srq_size": 0, 00:43:25.085 "io_path_stat": false, 00:43:25.085 "allow_accel_sequence": false, 00:43:25.085 "rdma_max_cq_size": 0, 00:43:25.085 "rdma_cm_event_timeout_ms": 0, 00:43:25.085 "dhchap_digests": [ 00:43:25.085 "sha256", 00:43:25.085 "sha384", 00:43:25.085 "sha512" 00:43:25.085 ], 00:43:25.085 "dhchap_dhgroups": [ 00:43:25.085 "null", 00:43:25.085 "ffdhe2048", 00:43:25.085 "ffdhe3072", 00:43:25.085 "ffdhe4096", 00:43:25.085 "ffdhe6144", 00:43:25.085 "ffdhe8192" 00:43:25.085 ] 00:43:25.085 } 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "method": "bdev_nvme_attach_controller", 00:43:25.085 "params": { 00:43:25.085 "name": "nvme0", 00:43:25.085 "trtype": "TCP", 00:43:25.085 "adrfam": "IPv4", 00:43:25.085 "traddr": "127.0.0.1", 00:43:25.085 "trsvcid": "4420", 00:43:25.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:25.085 "prchk_reftag": false, 00:43:25.085 "prchk_guard": false, 00:43:25.085 "ctrlr_loss_timeout_sec": 0, 00:43:25.085 "reconnect_delay_sec": 0, 00:43:25.085 "fast_io_fail_timeout_sec": 0, 00:43:25.085 "psk": "key0", 00:43:25.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:25.085 "hdgst": false, 00:43:25.085 "ddgst": false, 00:43:25.085 "multipath": "multipath" 00:43:25.085 } 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "method": "bdev_nvme_set_hotplug", 00:43:25.085 "params": { 00:43:25.085 "period_us": 100000, 00:43:25.085 "enable": false 00:43:25.085 } 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "method": "bdev_wait_for_examine" 00:43:25.085 } 00:43:25.085 ] 00:43:25.085 }, 00:43:25.085 { 00:43:25.085 "subsystem": "nbd", 00:43:25.085 "config": [] 00:43:25.085 } 00:43:25.085 ] 00:43:25.085 }' 00:43:25.085 14:41:30 keyring_file -- keyring/file.sh@115 -- # killprocess 3767086 00:43:25.085 14:41:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3767086 ']' 00:43:25.085 14:41:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3767086 00:43:25.085 14:41:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:25.085 14:41:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:25.085 14:41:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3767086 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3767086' 00:43:25.346 killing process with pid 3767086 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@973 -- # kill 3767086 00:43:25.346 Received shutdown signal, test time was about 1.000000 seconds 00:43:25.346 00:43:25.346 Latency(us) 00:43:25.346 [2024-11-25T13:41:30.436Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:25.346 [2024-11-25T13:41:30.436Z] =================================================================================================================== 00:43:25.346 [2024-11-25T13:41:30.436Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:25.346 14:41:30 
keyring_file -- common/autotest_common.sh@978 -- # wait 3767086 00:43:25.346 14:41:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=3768906 00:43:25.346 14:41:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3768906 /var/tmp/bperf.sock 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3768906 ']' 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:25.346 14:41:30 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:25.346 14:41:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:25.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:25.347 14:41:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:25.347 14:41:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:25.347 "subsystems": [ 00:43:25.347 { 00:43:25.347 "subsystem": "keyring", 00:43:25.347 "config": [ 00:43:25.347 { 00:43:25.347 "method": "keyring_file_add_key", 00:43:25.347 "params": { 00:43:25.347 "name": "key0", 00:43:25.347 "path": "/tmp/tmp.IDTb6Arpxf" 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "keyring_file_add_key", 00:43:25.347 "params": { 00:43:25.347 "name": "key1", 00:43:25.347 "path": "/tmp/tmp.G4nqjdOnwI" 00:43:25.347 } 00:43:25.347 } 00:43:25.347 ] 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "subsystem": "iobuf", 00:43:25.347 "config": [ 00:43:25.347 { 00:43:25.347 "method": "iobuf_set_options", 00:43:25.347 "params": { 00:43:25.347 "small_pool_count": 8192, 00:43:25.347 "large_pool_count": 1024, 00:43:25.347 "small_bufsize": 8192, 00:43:25.347 "large_bufsize": 135168, 00:43:25.347 "enable_numa": false 00:43:25.347 } 00:43:25.347 } 00:43:25.347 ] 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "subsystem": "sock", 00:43:25.347 "config": [ 00:43:25.347 { 00:43:25.347 "method": "sock_set_default_impl", 00:43:25.347 "params": { 00:43:25.347 "impl_name": "posix" 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "sock_impl_set_options", 00:43:25.347 "params": { 00:43:25.347 "impl_name": "ssl", 00:43:25.347 "recv_buf_size": 4096, 00:43:25.347 "send_buf_size": 4096, 00:43:25.347 "enable_recv_pipe": true, 00:43:25.347 "enable_quickack": false, 00:43:25.347 "enable_placement_id": 0, 00:43:25.347 "enable_zerocopy_send_server": true, 00:43:25.347 "enable_zerocopy_send_client": false, 00:43:25.347 "zerocopy_threshold": 0, 00:43:25.347 "tls_version": 0, 00:43:25.347 "enable_ktls": false 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "sock_impl_set_options", 00:43:25.347 "params": { 00:43:25.347 "impl_name": "posix", 00:43:25.347 "recv_buf_size": 2097152, 00:43:25.347 "send_buf_size": 2097152, 00:43:25.347 "enable_recv_pipe": true, 00:43:25.347 "enable_quickack": false, 00:43:25.347 "enable_placement_id": 0, 00:43:25.347 "enable_zerocopy_send_server": true, 00:43:25.347 "enable_zerocopy_send_client": false, 00:43:25.347 "zerocopy_threshold": 0, 00:43:25.347 "tls_version": 0, 00:43:25.347 "enable_ktls": false 00:43:25.347 } 00:43:25.347 } 00:43:25.347 ] 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "subsystem": "vmd", 00:43:25.347 "config": [] 00:43:25.347 }, 
00:43:25.347 { 00:43:25.347 "subsystem": "accel", 00:43:25.347 "config": [ 00:43:25.347 { 00:43:25.347 "method": "accel_set_options", 00:43:25.347 "params": { 00:43:25.347 "small_cache_size": 128, 00:43:25.347 "large_cache_size": 16, 00:43:25.347 "task_count": 2048, 00:43:25.347 "sequence_count": 2048, 00:43:25.347 "buf_count": 2048 00:43:25.347 } 00:43:25.347 } 00:43:25.347 ] 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "subsystem": "bdev", 00:43:25.347 "config": [ 00:43:25.347 { 00:43:25.347 "method": "bdev_set_options", 00:43:25.347 "params": { 00:43:25.347 "bdev_io_pool_size": 65535, 00:43:25.347 "bdev_io_cache_size": 256, 00:43:25.347 "bdev_auto_examine": true, 00:43:25.347 "iobuf_small_cache_size": 128, 00:43:25.347 "iobuf_large_cache_size": 16 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "bdev_raid_set_options", 00:43:25.347 "params": { 00:43:25.347 "process_window_size_kb": 1024, 00:43:25.347 "process_max_bandwidth_mb_sec": 0 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "bdev_iscsi_set_options", 00:43:25.347 "params": { 00:43:25.347 "timeout_sec": 30 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "bdev_nvme_set_options", 00:43:25.347 "params": { 00:43:25.347 "action_on_timeout": "none", 00:43:25.347 "timeout_us": 0, 00:43:25.347 "timeout_admin_us": 0, 00:43:25.347 "keep_alive_timeout_ms": 10000, 00:43:25.347 "arbitration_burst": 0, 00:43:25.347 "low_priority_weight": 0, 00:43:25.347 "medium_priority_weight": 0, 00:43:25.347 "high_priority_weight": 0, 00:43:25.347 "nvme_adminq_poll_period_us": 10000, 00:43:25.347 "nvme_ioq_poll_period_us": 0, 00:43:25.347 "io_queue_requests": 512, 00:43:25.347 "delay_cmd_submit": true, 00:43:25.347 "transport_retry_count": 4, 00:43:25.347 "bdev_retry_count": 3, 00:43:25.347 "transport_ack_timeout": 0, 00:43:25.347 "ctrlr_loss_timeout_sec": 0, 00:43:25.347 "reconnect_delay_sec": 0, 00:43:25.347 "fast_io_fail_timeout_sec": 0, 00:43:25.347 "disable_auto_failback": false, 00:43:25.347 "generate_uuids": false, 00:43:25.347 "transport_tos": 0, 00:43:25.347 "nvme_error_stat": false, 00:43:25.347 "rdma_srq_size": 0, 00:43:25.347 "io_path_stat": false, 00:43:25.347 "allow_accel_sequence": false, 00:43:25.347 "rdma_max_cq_size": 0, 00:43:25.347 "rdma_cm_event_timeout_ms": 0, 00:43:25.347 "dhchap_digests": [ 00:43:25.347 "sha256", 00:43:25.347 "sha384", 00:43:25.347 "sha512" 00:43:25.347 ], 00:43:25.347 "dhchap_dhgroups": [ 00:43:25.347 "null", 00:43:25.347 "ffdhe2048", 00:43:25.347 "ffdhe3072", 00:43:25.347 "ffdhe4096", 00:43:25.347 "ffdhe6144", 00:43:25.347 "ffdhe8192" 00:43:25.347 ] 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "bdev_nvme_attach_controller", 00:43:25.347 "params": { 00:43:25.347 "name": "nvme0", 00:43:25.347 "trtype": "TCP", 00:43:25.347 "adrfam": "IPv4", 00:43:25.347 "traddr": "127.0.0.1", 00:43:25.347 "trsvcid": "4420", 00:43:25.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:25.347 "prchk_reftag": false, 00:43:25.347 "prchk_guard": false, 00:43:25.347 "ctrlr_loss_timeout_sec": 0, 00:43:25.347 "reconnect_delay_sec": 0, 00:43:25.347 "fast_io_fail_timeout_sec": 0, 00:43:25.347 "psk": "key0", 00:43:25.347 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:25.347 "hdgst": false, 00:43:25.347 "ddgst": false, 00:43:25.347 "multipath": "multipath" 00:43:25.347 } 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "method": "bdev_nvme_set_hotplug", 00:43:25.347 "params": { 00:43:25.347 "period_us": 100000, 00:43:25.347 "enable": false 00:43:25.347 } 00:43:25.347 }, 
00:43:25.347 { 00:43:25.347 "method": "bdev_wait_for_examine" 00:43:25.347 } 00:43:25.347 ] 00:43:25.347 }, 00:43:25.347 { 00:43:25.347 "subsystem": "nbd", 00:43:25.347 "config": [] 00:43:25.347 } 00:43:25.347 ] 00:43:25.347 }' 00:43:25.347 14:41:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:25.347 [2024-11-25 14:41:30.378131] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 00:43:25.347 [2024-11-25 14:41:30.378193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3768906 ] 00:43:25.608 [2024-11-25 14:41:30.463643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:25.608 [2024-11-25 14:41:30.492807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:25.608 [2024-11-25 14:41:30.635944] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:26.178 14:41:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:26.178 14:41:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:26.178 14:41:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:26.178 14:41:31 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:26.178 14:41:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.438 14:41:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:26.438 14:41:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:26.438 14:41:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:26.438 14:41:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:26.439 14:41:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:26.439 14:41:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.439 14:41:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:26.698 14:41:31 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:26.699 14:41:31 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:26.699 14:41:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:26.699 14:41:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:26.699 14:41:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:26.699 14:41:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.699 14:41:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:26.699 14:41:31 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:26.699 14:41:31 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:26.699 14:41:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:26.699 14:41:31 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:26.959 14:41:31 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:26.959 14:41:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:26.959 14:41:31 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.IDTb6Arpxf /tmp/tmp.G4nqjdOnwI
00:43:26.959 14:41:31 keyring_file -- keyring/file.sh@20 -- # killprocess 3768906
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3768906 ']'
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3768906
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@959 -- # uname
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3768906
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3768906'
00:43:26.959 killing process with pid 3768906
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@973 -- # kill 3768906
00:43:26.959 Received shutdown signal, test time was about 1.000000 seconds
00:43:26.959
00:43:26.959 Latency(us)
00:43:26.959 [2024-11-25T13:41:32.049Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:43:26.959 [2024-11-25T13:41:32.049Z] ===================================================================================================================
00:43:26.959 [2024-11-25T13:41:32.049Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:43:26.959 14:41:31 keyring_file -- common/autotest_common.sh@978 -- # wait 3768906
00:43:27.219 14:41:32 keyring_file -- keyring/file.sh@21 -- # killprocess 3767046
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3767046 ']'
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3767046
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@959 -- # uname
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3767046
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3767046'
00:43:27.219 killing process with pid 3767046
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@973 -- # kill 3767046
00:43:27.219 14:41:32 keyring_file -- common/autotest_common.sh@978 -- # wait 3767046
00:43:27.479
00:43:27.479 real 0m12.131s
00:43:27.479 user 0m29.246s
00:43:27.479 sys 0m2.783s
00:43:27.479 14:41:32 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:43:27.479 14:41:32 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:43:27.479 ************************************
00:43:27.479 END TEST keyring_file
00:43:27.479 ************************************
00:43:27.479 14:41:32 -- spdk/autotest.sh@293 -- # [[ y == y ]]
00:43:27.479 14:41:32 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:43:27.479 14:41:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:43:27.479 14:41:32 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:43:27.479 14:41:32 -- 
common/autotest_common.sh@10 -- # set +x 00:43:27.479 ************************************ 00:43:27.479 START TEST keyring_linux 00:43:27.479 ************************************ 00:43:27.479 14:41:32 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:27.479 Joined session keyring: 655048985 00:43:27.479 * Looking for test storage... 00:43:27.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:27.479 14:41:32 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:27.479 14:41:32 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:43:27.479 14:41:32 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:27.741 14:41:32 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:27.741 14:41:32 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:27.741 14:41:32 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:27.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.741 --rc genhtml_branch_coverage=1 00:43:27.741 --rc genhtml_function_coverage=1 00:43:27.741 --rc genhtml_legend=1 00:43:27.741 --rc geninfo_all_blocks=1 00:43:27.741 --rc geninfo_unexecuted_blocks=1 00:43:27.741 00:43:27.741 ' 00:43:27.741 14:41:32 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:27.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.741 --rc genhtml_branch_coverage=1 00:43:27.741 --rc genhtml_function_coverage=1 00:43:27.741 --rc genhtml_legend=1 00:43:27.741 --rc geninfo_all_blocks=1 00:43:27.741 --rc geninfo_unexecuted_blocks=1 00:43:27.741 00:43:27.741 ' 00:43:27.741 14:41:32 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:27.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.741 --rc genhtml_branch_coverage=1 00:43:27.741 --rc genhtml_function_coverage=1 00:43:27.741 --rc genhtml_legend=1 00:43:27.741 --rc geninfo_all_blocks=1 00:43:27.741 --rc geninfo_unexecuted_blocks=1 00:43:27.741 00:43:27.741 ' 00:43:27.741 14:41:32 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:27.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:27.741 --rc genhtml_branch_coverage=1 00:43:27.741 --rc genhtml_function_coverage=1 00:43:27.741 --rc genhtml_legend=1 00:43:27.741 --rc geninfo_all_blocks=1 00:43:27.741 --rc geninfo_unexecuted_blocks=1 00:43:27.741 00:43:27.741 ' 00:43:27.741 14:41:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:27.741 14:41:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:27.741 14:41:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:27.741 14:41:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.741 14:41:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.741 14:41:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:27.741 14:41:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:27.741 14:41:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
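[Editor's note] The key-preparation trace that follows provisions NVMe/TCP TLS pre-shared keys in the interchange format (NVMeTLSkey-1:<hash>:<base64>:). A minimal sketch of how such a value could be composed, assuming, based on the values visible in this log, that the base64 payload is the configured secret's ASCII bytes followed by a little-endian CRC32 of those bytes. This is an illustration, not the exact SPDK helper:

psk_interchange() {   # hypothetical helper mirroring the "python -" call traced below
  local key=$1 digest=${2:-0}
  python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # secret as ASCII bytes, e.g. 00112233...eeff
digest = int(sys.argv[2])                     # 0 == no PSK hash function selected
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed 4-byte integrity tail
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PYEOF
}

If the assumption holds, "psk_interchange 00112233445566778899aabbccddeeff 0" reproduces the :spdk-test:key0 value that keyctl later stores in the session keyring.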
00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:27.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:27.741 14:41:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:27.742 /tmp/:spdk-test:key0 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:27.742 
14:41:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:27.742 14:41:32 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:27.742 14:41:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:27.742 /tmp/:spdk-test:key1 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3769367 00:43:27.742 14:41:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3769367 00:43:27.742 14:41:32 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3769367 ']' 00:43:27.742 14:41:32 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:27.742 14:41:32 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:27.742 14:41:32 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:27.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:27.742 14:41:32 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:27.742 14:41:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:27.742 [2024-11-25 14:41:32.781346] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:43:27.742 [2024-11-25 14:41:32.781400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769367 ] 00:43:28.002 [2024-11-25 14:41:32.865894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:28.002 [2024-11-25 14:41:32.897255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:28.572 14:41:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:28.572 14:41:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:28.572 14:41:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:28.572 14:41:33 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:28.572 14:41:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:28.572 [2024-11-25 14:41:33.602976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:28.572 null0 00:43:28.572 [2024-11-25 14:41:33.635027] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:28.572 [2024-11-25 14:41:33.635398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:28.572 14:41:33 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:28.572 14:41:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:28.572 489687947 00:43:28.572 14:41:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:28.832 998024832 00:43:28.832 14:41:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3769678 00:43:28.832 14:41:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3769678 /var/tmp/bperf.sock 00:43:28.832 14:41:33 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:28.832 14:41:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3769678 ']' 00:43:28.832 14:41:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:28.832 14:41:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:28.832 14:41:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:28.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:28.832 14:41:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:28.832 14:41:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:28.832 [2024-11-25 14:41:33.713919] Starting SPDK v25.01-pre git sha1 9d382c252 / DPDK 24.03.0 initialization... 
00:43:28.832 [2024-11-25 14:41:33.713967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769678 ] 00:43:28.832 [2024-11-25 14:41:33.795293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:28.832 [2024-11-25 14:41:33.824852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:29.772 14:41:34 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:29.772 14:41:34 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:29.772 14:41:34 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:29.772 14:41:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:29.772 14:41:34 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:29.772 14:41:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:30.032 14:41:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:30.032 14:41:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:30.032 [2024-11-25 14:41:35.024847] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:30.032 nvme0n1 00:43:30.032 14:41:35 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:43:30.032 14:41:35 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:30.032 14:41:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:30.032 14:41:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:30.032 14:41:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:30.032 14:41:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:30.293 14:41:35 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:30.293 14:41:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:30.293 14:41:35 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:30.293 14:41:35 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:30.293 14:41:35 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:30.293 14:41:35 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:30.293 14:41:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:30.555 14:41:35 keyring_linux -- keyring/linux.sh@25 -- # sn=489687947 00:43:30.555 14:41:35 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:30.555 14:41:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:30.555 14:41:35 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 489687947 == \4\8\9\6\8\7\9\4\7 ]] 00:43:30.555 14:41:35 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 489687947 00:43:30.555 14:41:35 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:30.555 14:41:35 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:30.555 Running I/O for 1 seconds... 00:43:31.755 24479.00 IOPS, 95.62 MiB/s 00:43:31.755 Latency(us) 00:43:31.755 [2024-11-25T13:41:36.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:31.755 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:31.755 nvme0n1 : 1.01 24479.00 95.62 0.00 0.00 5213.41 3877.55 8301.23 00:43:31.755 [2024-11-25T13:41:36.845Z] =================================================================================================================== 00:43:31.755 [2024-11-25T13:41:36.845Z] Total : 24479.00 95.62 0.00 0.00 5213.41 3877.55 8301.23 00:43:31.755 { 00:43:31.755 "results": [ 00:43:31.755 { 00:43:31.755 "job": "nvme0n1", 00:43:31.755 "core_mask": "0x2", 00:43:31.755 "workload": "randread", 00:43:31.755 "status": "finished", 00:43:31.755 "queue_depth": 128, 00:43:31.755 "io_size": 4096, 00:43:31.755 "runtime": 1.005229, 00:43:31.755 "iops": 24478.999312594442, 00:43:31.755 "mibps": 95.62109106482204, 00:43:31.755 "io_failed": 0, 00:43:31.755 "io_timeout": 0, 00:43:31.755 "avg_latency_us": 5213.407731675268, 00:43:31.755 "min_latency_us": 3877.5466666666666, 00:43:31.755 "max_latency_us": 8301.226666666667 00:43:31.755 } 00:43:31.755 ], 00:43:31.755 "core_count": 1 00:43:31.755 } 00:43:31.755 14:41:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:31.755 14:41:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:31.755 14:41:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:31.755 14:41:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:31.755 14:41:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:31.755 14:41:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:31.755 14:41:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:31.755 14:41:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.015 14:41:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:32.015 14:41:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:32.015 14:41:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:32.015 14:41:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:32.015 14:41:36 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:32.015 14:41:36 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:43:32.015 14:41:36 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:32.015 14:41:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:32.015 14:41:36 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:32.015 14:41:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:32.015 14:41:36 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:32.015 14:41:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:32.275 [2024-11-25 14:41:37.154288] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:32.275 [2024-11-25 14:41:37.154463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef6a20 (107): Transport endpoint is not connected 00:43:32.275 [2024-11-25 14:41:37.155459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef6a20 (9): Bad file descriptor 00:43:32.275 [2024-11-25 14:41:37.156461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:32.275 [2024-11-25 14:41:37.156469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:32.275 [2024-11-25 14:41:37.156475] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:32.275 [2024-11-25 14:41:37.156481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:32.275 request: 00:43:32.275 { 00:43:32.275 "name": "nvme0", 00:43:32.275 "trtype": "tcp", 00:43:32.275 "traddr": "127.0.0.1", 00:43:32.275 "adrfam": "ipv4", 00:43:32.275 "trsvcid": "4420", 00:43:32.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:32.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:32.275 "prchk_reftag": false, 00:43:32.275 "prchk_guard": false, 00:43:32.275 "hdgst": false, 00:43:32.275 "ddgst": false, 00:43:32.275 "psk": ":spdk-test:key1", 00:43:32.275 "allow_unrecognized_csi": false, 00:43:32.275 "method": "bdev_nvme_attach_controller", 00:43:32.275 "req_id": 1 00:43:32.275 } 00:43:32.275 Got JSON-RPC error response 00:43:32.275 response: 00:43:32.275 { 00:43:32.275 "code": -5, 00:43:32.276 "message": "Input/output error" 00:43:32.276 } 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@33 -- # sn=489687947 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 489687947 00:43:32.276 1 links removed 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@33 -- # sn=998024832 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 998024832 00:43:32.276 1 links removed 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3769678 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3769678 ']' 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3769678 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3769678 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3769678' 00:43:32.276 killing process with pid 3769678 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 3769678 00:43:32.276 Received shutdown signal, test time was about 1.000000 seconds 00:43:32.276 00:43:32.276 
Latency(us) 00:43:32.276 [2024-11-25T13:41:37.366Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:32.276 [2024-11-25T13:41:37.366Z] =================================================================================================================== 00:43:32.276 [2024-11-25T13:41:37.366Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 3769678 00:43:32.276 14:41:37 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3769367 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3769367 ']' 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3769367 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:32.276 14:41:37 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3769367 00:43:32.536 14:41:37 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:32.536 14:41:37 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:32.536 14:41:37 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3769367' 00:43:32.536 killing process with pid 3769367 00:43:32.536 14:41:37 keyring_linux -- common/autotest_common.sh@973 -- # kill 3769367 00:43:32.536 14:41:37 keyring_linux -- common/autotest_common.sh@978 -- # wait 3769367 00:43:32.536 00:43:32.536 real 0m5.192s 00:43:32.536 user 0m9.706s 00:43:32.536 sys 0m1.402s 00:43:32.536 14:41:37 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:32.536 14:41:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:32.536 ************************************ 00:43:32.536 END TEST keyring_linux 00:43:32.536 ************************************ 00:43:32.797 14:41:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:43:32.797 14:41:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:32.797 14:41:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:32.797 14:41:37 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:43:32.797 14:41:37 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:43:32.797 14:41:37 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:43:32.797 14:41:37 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:43:32.797 14:41:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:32.797 14:41:37 -- common/autotest_common.sh@10 -- # set +x 00:43:32.797 14:41:37 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:43:32.797 14:41:37 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:43:32.797 14:41:37 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:43:32.797 14:41:37 -- common/autotest_common.sh@10 -- # set +x 00:43:40.936 INFO: APP EXITING 
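[Editor's note] Both helper daemons (the bdevperf instance and the spdk_tgt) are torn down with the killprocess idiom traced above: probe that the PID is still alive with kill -0, identify the process before signalling it, refuse to kill the sudo wrapper itself, then kill and reap. A condensed sketch of that idiom, not the exact autotest_common.sh helper:

killprocess() {
  local pid=$1 name
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 0    # liveness probe; already gone is fine
  name=$(ps --no-headers -o comm= "$pid")   # identify before signalling
  [ "$name" = sudo ] && return 1            # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                   # reap so the exit status is collected
}

wait can only reap children of the current shell, which holds here because the test script launched both daemons itself.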
00:43:40.936 INFO: killing all VMs 00:43:40.936 INFO: killing vhost app 00:43:40.936 WARN: no vhost pid file found 00:43:40.936 INFO: EXIT DONE 00:43:44.231 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:65:00.0 (144d a80a): Already using the nvme driver 00:43:44.231 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:43:44.231 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:43:47.533 Cleaning 00:43:47.533 Removing: /var/run/dpdk/spdk0/config 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:47.533 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:47.793 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:47.793 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:47.793 Removing: /var/run/dpdk/spdk1/config 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:47.793 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:47.793 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:47.793 Removing: /var/run/dpdk/spdk2/config 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:47.793 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:47.793 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:47.793 Removing: 
/var/run/dpdk/spdk3/config 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:47.793 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:47.793 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:47.793 Removing: /var/run/dpdk/spdk4/config 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:47.793 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:47.793 Removing: /var/run/dpdk/spdk4/hugepage_info 00:43:47.793 Removing: /dev/shm/bdev_svc_trace.1 00:43:47.793 Removing: /dev/shm/nvmf_trace.0 00:43:47.793 Removing: /dev/shm/spdk_tgt_trace.pid3192960 00:43:47.793 Removing: /var/run/dpdk/spdk0 00:43:47.793 Removing: /var/run/dpdk/spdk1 00:43:47.793 Removing: /var/run/dpdk/spdk2 00:43:47.793 Removing: /var/run/dpdk/spdk3 00:43:47.793 Removing: /var/run/dpdk/spdk4 00:43:47.793 Removing: /var/run/dpdk/spdk_pid3191471 00:43:47.793 Removing: /var/run/dpdk/spdk_pid3192960 00:43:47.793 Removing: /var/run/dpdk/spdk_pid3193814 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3194850 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3195190 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3196269 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3196424 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3196740 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3197873 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3198573 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3198928 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3199257 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3199618 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3199960 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3200307 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3200663 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3200988 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3202122 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3205662 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3205957 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3206323 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3206455 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3206848 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3207163 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3207541 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3207741 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3207985 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3208251 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3208442 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3208629 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3209075 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3209429 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3209830 00:43:48.054 Removing: 
/var/run/dpdk/spdk_pid3214367 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3219748 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3232363 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3233048 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3238435 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3238790 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3243881 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3250941 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3254252 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3266561 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3277715 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3280199 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3281215 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3302199 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3306974 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3363512 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3369903 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3377062 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3385090 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3385095 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3386538 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3387564 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3388569 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3389247 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3389249 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3389575 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3389594 00:43:48.054 Removing: /var/run/dpdk/spdk_pid3389599 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3390641 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3391659 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3392757 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3393362 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3393478 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3393715 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3395091 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3396467 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3406283 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3440788 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3446247 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3448113 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3450371 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3450708 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3451005 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3451178 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3451961 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3454136 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3455452 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3455959 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3458648 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3459360 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3460248 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3465149 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3472401 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3472402 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3472403 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3477104 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3487359 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3492171 00:43:48.315 Removing: /var/run/dpdk/spdk_pid3499404 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3500902 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3502738 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3504261 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3509965 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3515179 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3520217 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3529817 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3529937 00:43:48.316 Removing: 
/var/run/dpdk/spdk_pid3535180 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3535346 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3535530 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3536136 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3536199 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3541584 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3542406 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3547605 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3550942 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3557425 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3563879 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3574180 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3583291 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3583347 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3606205 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3606894 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3607676 00:43:48.316 Removing: /var/run/dpdk/spdk_pid3608469 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3609439 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3610245 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3611009 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3611694 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3616747 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3617086 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3624291 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3624505 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3631527 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3636549 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3648238 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3648915 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3653982 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3654329 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3659390 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3666340 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3669342 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3681936 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3692575 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3694579 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3695584 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3715197 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3719921 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3723098 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3730980 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3730985 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3737325 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3739691 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3742042 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3743265 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3745749 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3747055 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3757189 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3757681 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3758246 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3761175 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3761844 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3762389 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3767046 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3767086 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3768906 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3769367 00:43:48.577 Removing: /var/run/dpdk/spdk_pid3769678 00:43:48.577 Clean 00:43:48.837 14:41:53 -- common/autotest_common.sh@1453 -- # return 0 00:43:48.838 14:41:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:43:48.838 14:41:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:48.838 14:41:53 -- common/autotest_common.sh@10 -- # set +x 00:43:48.838 14:41:53 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:43:48.838 14:41:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:48.838 14:41:53 -- common/autotest_common.sh@10 -- # set +x 00:43:48.838 14:41:53 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:43:48.838 14:41:53 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:43:48.838 14:41:53 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:43:48.838 14:41:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:43:48.838 14:41:53 -- spdk/autotest.sh@398 -- # hostname 00:43:48.838 14:41:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:43:49.098 geninfo: WARNING: invalid characters removed from testname! 00:44:15.673 14:42:19 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:17.587 14:42:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:20.133 14:42:24 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:21.518 14:42:26 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:22.904 14:42:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:24.824 14:42:29 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:44:26.209 14:42:31 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:26.210 14:42:31 -- spdk/autorun.sh@1 -- $ timing_finish 00:44:26.210 14:42:31 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:44:26.210 14:42:31 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:44:26.210 14:42:31 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:44:26.210 14:42:31 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:44:26.471 + [[ -n 3106073 ]] 00:44:26.471 + sudo kill 3106073 00:44:26.483 [Pipeline] } 00:44:26.498 [Pipeline] // stage 00:44:26.503 [Pipeline] } 00:44:26.517 [Pipeline] // timeout 00:44:26.522 [Pipeline] } 00:44:26.536 [Pipeline] // catchError 00:44:26.541 [Pipeline] } 00:44:26.558 [Pipeline] // wrap 00:44:26.564 [Pipeline] } 00:44:26.577 [Pipeline] // catchError 00:44:26.586 [Pipeline] stage 00:44:26.588 [Pipeline] { (Epilogue) 00:44:26.603 [Pipeline] catchError 00:44:26.604 [Pipeline] { 00:44:26.618 [Pipeline] echo 00:44:26.619 Cleanup processes 00:44:26.625 [Pipeline] sh 00:44:26.994 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:26.995 3783277 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:27.038 [Pipeline] sh 00:44:27.353 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:44:27.353 ++ grep -v 'sudo pgrep' 00:44:27.353 ++ awk '{print $1}' 00:44:27.353 + sudo kill -9 00:44:27.353 + true 00:44:27.367 [Pipeline] sh 00:44:27.662 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:44:39.908 [Pipeline] sh 00:44:40.200 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:44:40.200 Artifacts sizes are good 00:44:40.216 [Pipeline] archiveArtifacts 00:44:40.224 Archiving artifacts 00:44:40.382 [Pipeline] sh 00:44:40.671 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:44:40.687 [Pipeline] cleanWs 00:44:40.698 [WS-CLEANUP] Deleting project workspace... 00:44:40.698 [WS-CLEANUP] Deferred wipeout is used... 00:44:40.705 [WS-CLEANUP] done 00:44:40.707 [Pipeline] } 00:44:40.724 [Pipeline] // catchError 00:44:40.736 [Pipeline] sh 00:44:41.023 + logger -p user.info -t JENKINS-CI 00:44:41.034 [Pipeline] } 00:44:41.048 [Pipeline] // stage 00:44:41.054 [Pipeline] } 00:44:41.069 [Pipeline] // node 00:44:41.074 [Pipeline] End of Pipeline 00:44:41.106 Finished: SUCCESS